Test Report: Docker_Linux_crio 20107

8d7d309004e1c5aed2c11e9a2f72e102a81e4e45:2024-12-16:37505

Failed tests (2/330)

Order  Failed test                        Duration (s)
36     TestAddons/parallel/Ingress        152.57
38     TestAddons/parallel/MetricsServer  359.69
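
To re-run just these two tests locally, a minimal sketch (assuming the minikube repository layout with the integration suite under test/integration and a prebuilt out/minikube-linux-amd64 binary; the driver and runtime selection flags this CI job passes through its harness are not reproduced here):

	# run only the two failing subtests; the generous -timeout is because each test provisions a cluster
	go test ./test/integration -v -timeout 60m \
	  -run "TestAddons/parallel/(Ingress|MetricsServer)"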
TestAddons/parallel/Ingress (152.57s)
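
The assertion that fails below is the in-cluster ingress probe: the test runs curl against 127.0.0.1 with a Host: nginx.example.com header through minikube ssh, and the remote command exits with status 28, which for curl is an operation-timeout error. A manual re-check against the same profile can use the same command; this is only a triage sketch, with the profile name taken from this run and the -v/--max-time flags added here for diagnosis (they are not part of the test):

	# probe the ingress controller from inside the node, mirroring the failing test step
	out/minikube-linux-amd64 -p addons-109663 ssh \
	  "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# a response from the backend nginx pod means routing works; a timeout reproduces the failure below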

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-109663 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-109663 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-109663 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0c78f186-ad36-4191-83a7-fc36688df669] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0c78f186-ad36-4191-83a7-fc36688df669] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003048741s
I1216 10:35:31.764473  847292 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-109663 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.513370222s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-109663 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-109663
helpers_test.go:235: (dbg) docker inspect addons-109663:

-- stdout --
	[
	    {
	        "Id": "1a5d30b35ebd94d45b4b20f053e9d801bffac6feb46db54b983452fbca50984b",
	        "Created": "2024-12-16T10:32:42.208735109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 849348,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-16T10:32:42.321849535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7036ee4d70b7e266f67949e27a52ed21246dbdde9902b1d29235748548d311cb",
	        "ResolvConfPath": "/var/lib/docker/containers/1a5d30b35ebd94d45b4b20f053e9d801bffac6feb46db54b983452fbca50984b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1a5d30b35ebd94d45b4b20f053e9d801bffac6feb46db54b983452fbca50984b/hostname",
	        "HostsPath": "/var/lib/docker/containers/1a5d30b35ebd94d45b4b20f053e9d801bffac6feb46db54b983452fbca50984b/hosts",
	        "LogPath": "/var/lib/docker/containers/1a5d30b35ebd94d45b4b20f053e9d801bffac6feb46db54b983452fbca50984b/1a5d30b35ebd94d45b4b20f053e9d801bffac6feb46db54b983452fbca50984b-json.log",
	        "Name": "/addons-109663",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-109663:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-109663",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/65804e8ecf53a4a783bcbd11ff1ee57774d652a79d14faa51abcf74021f9f0a6-init/diff:/var/lib/docker/overlay2/123e2f1df366b4ca43a26782c77043f0e4cd5c6388fa90b6b3300da767616189/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65804e8ecf53a4a783bcbd11ff1ee57774d652a79d14faa51abcf74021f9f0a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65804e8ecf53a4a783bcbd11ff1ee57774d652a79d14faa51abcf74021f9f0a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65804e8ecf53a4a783bcbd11ff1ee57774d652a79d14faa51abcf74021f9f0a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-109663",
	                "Source": "/var/lib/docker/volumes/addons-109663/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-109663",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-109663",
	                "name.minikube.sigs.k8s.io": "addons-109663",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6c0a3e7ab167392b4b1457a91806bd78d3a67f0fd8e01a37251db9ff03c74d5d",
	            "SandboxKey": "/var/run/docker/netns/6c0a3e7ab167",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-109663": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8d8d19425ae9a0d7e09aa1deae754ccc44dc321a7589581cd2cc49ee9d8127e2",
	                    "EndpointID": "1fd1a8fade4259280c934e3bd3078705e00cc2e63230df4f97442f57a51b046a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-109663",
	                        "1a5d30b35ebd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-109663 -n addons-109663
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-109663 logs -n 25: (1.09121414s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-505735                                                                     | download-only-505735   | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC | 16 Dec 24 10:32 UTC |
	| start   | --download-only -p                                                                          | download-docker-072674 | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC |                     |
	|         | download-docker-072674                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-072674                                                                   | download-docker-072674 | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC | 16 Dec 24 10:32 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-516574   | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC |                     |
	|         | binary-mirror-516574                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:32893                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-516574                                                                     | binary-mirror-516574   | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC | 16 Dec 24 10:32 UTC |
	| addons  | enable dashboard -p                                                                         | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC |                     |
	|         | addons-109663                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC |                     |
	|         | addons-109663                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-109663 --wait=true                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC | 16 Dec 24 10:34 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-109663 addons disable                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-109663 addons disable                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:35 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | -p addons-109663                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-109663 addons                                                                        | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-109663 addons disable                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-109663 ssh cat                                                                       | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | /opt/local-path-provisioner/pvc-9e504c9a-bb3a-4229-9525-d31715212760_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-109663 addons disable                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-109663 ip                                                                            | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	| addons  | addons-109663 addons disable                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-109663 addons                                                                        | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-109663 addons disable                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-109663 addons                                                                        | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-109663 ssh curl -s                                                                   | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-109663 addons disable                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-109663 addons                                                                        | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:36 UTC | 16 Dec 24 10:36 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-109663 addons                                                                        | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:36 UTC | 16 Dec 24 10:36 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-109663 ip                                                                            | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:37 UTC | 16 Dec 24 10:37 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 10:32:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 10:32:20.176960  848599 out.go:345] Setting OutFile to fd 1 ...
	I1216 10:32:20.177056  848599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:32:20.177064  848599 out.go:358] Setting ErrFile to fd 2...
	I1216 10:32:20.177068  848599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:32:20.177239  848599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-840384/.minikube/bin
	I1216 10:32:20.177825  848599 out.go:352] Setting JSON to false
	I1216 10:32:20.178694  848599 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11687,"bootTime":1734333453,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 10:32:20.178790  848599 start.go:139] virtualization: kvm guest
	I1216 10:32:20.180687  848599 out.go:177] * [addons-109663] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 10:32:20.182103  848599 notify.go:220] Checking for updates...
	I1216 10:32:20.182122  848599 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 10:32:20.183273  848599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 10:32:20.184504  848599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-840384/kubeconfig
	I1216 10:32:20.185694  848599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-840384/.minikube
	I1216 10:32:20.186976  848599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 10:32:20.188067  848599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 10:32:20.189305  848599 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 10:32:20.210238  848599 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 10:32:20.210385  848599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 10:32:20.255369  848599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-16 10:32:20.24671902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 10:32:20.255520  848599 docker.go:318] overlay module found
	I1216 10:32:20.257178  848599 out.go:177] * Using the docker driver based on user configuration
	I1216 10:32:20.258429  848599 start.go:297] selected driver: docker
	I1216 10:32:20.258449  848599 start.go:901] validating driver "docker" against <nil>
	I1216 10:32:20.258461  848599 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 10:32:20.259277  848599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 10:32:20.303533  848599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-16 10:32:20.295513369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 10:32:20.303701  848599 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 10:32:20.303936  848599 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 10:32:20.305297  848599 out.go:177] * Using Docker driver with root privileges
	I1216 10:32:20.306405  848599 cni.go:84] Creating CNI manager for ""
	I1216 10:32:20.306461  848599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 10:32:20.306471  848599 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 10:32:20.306562  848599 start.go:340] cluster config:
	{Name:addons-109663 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-109663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 10:32:20.307714  848599 out.go:177] * Starting "addons-109663" primary control-plane node in "addons-109663" cluster
	I1216 10:32:20.308731  848599 cache.go:121] Beginning downloading kic base image for docker with crio
	I1216 10:32:20.309955  848599 out.go:177] * Pulling base image v0.0.45-1733912881-20083 ...
	I1216 10:32:20.311129  848599 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 10:32:20.311157  848599 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-840384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1216 10:32:20.311163  848599 cache.go:56] Caching tarball of preloaded images
	I1216 10:32:20.311160  848599 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local docker daemon
	I1216 10:32:20.311232  848599 preload.go:172] Found /home/jenkins/minikube-integration/20107-840384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 10:32:20.311243  848599 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1216 10:32:20.311587  848599 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/config.json ...
	I1216 10:32:20.311614  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/config.json: {Name:mkeda270ee12e3e9c2b3f96211254f0d67bf6da1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:20.325703  848599 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 to local cache
	I1216 10:32:20.325805  848599 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local cache directory
	I1216 10:32:20.325824  848599 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local cache directory, skipping pull
	I1216 10:32:20.325831  848599 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 exists in cache, skipping pull
	I1216 10:32:20.325842  848599 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 as a tarball
	I1216 10:32:20.325853  848599 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 from local cache
	I1216 10:32:32.270421  848599 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 from cached tarball
	I1216 10:32:32.270469  848599 cache.go:194] Successfully downloaded all kic artifacts
	I1216 10:32:32.270526  848599 start.go:360] acquireMachinesLock for addons-109663: {Name:mk322ac902230420e2cfa3c4d031bb3cb0c61bc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 10:32:32.270650  848599 start.go:364] duration metric: took 96.592µs to acquireMachinesLock for "addons-109663"
	I1216 10:32:32.270692  848599 start.go:93] Provisioning new machine with config: &{Name:addons-109663 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-109663 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 10:32:32.270785  848599 start.go:125] createHost starting for "" (driver="docker")
	I1216 10:32:32.272482  848599 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1216 10:32:32.272745  848599 start.go:159] libmachine.API.Create for "addons-109663" (driver="docker")
	I1216 10:32:32.272789  848599 client.go:168] LocalClient.Create starting
	I1216 10:32:32.272894  848599 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca.pem
	I1216 10:32:32.524572  848599 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/cert.pem
	I1216 10:32:32.623176  848599 cli_runner.go:164] Run: docker network inspect addons-109663 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 10:32:32.639140  848599 cli_runner.go:211] docker network inspect addons-109663 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 10:32:32.639223  848599 network_create.go:284] running [docker network inspect addons-109663] to gather additional debugging logs...
	I1216 10:32:32.639249  848599 cli_runner.go:164] Run: docker network inspect addons-109663
	W1216 10:32:32.654825  848599 cli_runner.go:211] docker network inspect addons-109663 returned with exit code 1
	I1216 10:32:32.654856  848599 network_create.go:287] error running [docker network inspect addons-109663]: docker network inspect addons-109663: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-109663 not found
	I1216 10:32:32.654870  848599 network_create.go:289] output of [docker network inspect addons-109663]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-109663 not found
	
	** /stderr **
	I1216 10:32:32.654959  848599 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 10:32:32.670365  848599 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0004f4fa0}
	I1216 10:32:32.670414  848599 network_create.go:124] attempt to create docker network addons-109663 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1216 10:32:32.670452  848599 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-109663 addons-109663
	I1216 10:32:32.728359  848599 network_create.go:108] docker network addons-109663 192.168.49.0/24 created
	I1216 10:32:32.728388  848599 kic.go:121] calculated static IP "192.168.49.2" for the "addons-109663" container
	I1216 10:32:32.728453  848599 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 10:32:32.743748  848599 cli_runner.go:164] Run: docker volume create addons-109663 --label name.minikube.sigs.k8s.io=addons-109663 --label created_by.minikube.sigs.k8s.io=true
	I1216 10:32:32.759894  848599 oci.go:103] Successfully created a docker volume addons-109663
	I1216 10:32:32.759977  848599 cli_runner.go:164] Run: docker run --rm --name addons-109663-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-109663 --entrypoint /usr/bin/test -v addons-109663:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -d /var/lib
	I1216 10:32:37.657657  848599 cli_runner.go:217] Completed: docker run --rm --name addons-109663-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-109663 --entrypoint /usr/bin/test -v addons-109663:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -d /var/lib: (4.897631821s)
	I1216 10:32:37.657697  848599 oci.go:107] Successfully prepared a docker volume addons-109663
	I1216 10:32:37.657718  848599 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 10:32:37.657747  848599 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 10:32:37.657821  848599 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20107-840384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-109663:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 10:32:42.147706  848599 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20107-840384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-109663:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -I lz4 -xf /preloaded.tar -C /extractDir: (4.489834787s)
	I1216 10:32:42.147741  848599 kic.go:203] duration metric: took 4.489992007s to extract preloaded images to volume ...
	W1216 10:32:42.147865  848599 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 10:32:42.147983  848599 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 10:32:42.194676  848599 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-109663 --name addons-109663 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-109663 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-109663 --network addons-109663 --ip 192.168.49.2 --volume addons-109663:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2
	I1216 10:32:42.492595  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Running}}
	I1216 10:32:42.509777  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:32:42.526135  848599 cli_runner.go:164] Run: docker exec addons-109663 stat /var/lib/dpkg/alternatives/iptables
	I1216 10:32:42.563632  848599 oci.go:144] the created container "addons-109663" has a running status.
	I1216 10:32:42.563664  848599 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa...
	I1216 10:32:42.655608  848599 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 10:32:42.674141  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:32:42.690709  848599 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 10:32:42.690729  848599 kic_runner.go:114] Args: [docker exec --privileged addons-109663 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 10:32:42.733555  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:32:42.751672  848599 machine.go:93] provisionDockerMachine start ...
	I1216 10:32:42.751782  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:42.769939  848599 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:42.770137  848599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1216 10:32:42.770149  848599 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 10:32:42.770885  848599 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52954->127.0.0.1:33139: read: connection reset by peer
	I1216 10:32:45.894395  848599 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-109663
	
	I1216 10:32:45.894432  848599 ubuntu.go:169] provisioning hostname "addons-109663"
	I1216 10:32:45.894492  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:45.910952  848599 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:45.911128  848599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1216 10:32:45.911140  848599 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-109663 && echo "addons-109663" | sudo tee /etc/hostname
	I1216 10:32:46.045111  848599 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-109663
	
	I1216 10:32:46.045193  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:46.061625  848599 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:46.061807  848599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1216 10:32:46.061823  848599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-109663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-109663/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-109663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 10:32:46.186853  848599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 10:32:46.186874  848599 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20107-840384/.minikube CaCertPath:/home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20107-840384/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20107-840384/.minikube}
	I1216 10:32:46.186896  848599 ubuntu.go:177] setting up certificates
	I1216 10:32:46.186907  848599 provision.go:84] configureAuth start
	I1216 10:32:46.186952  848599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-109663
	I1216 10:32:46.201955  848599 provision.go:143] copyHostCerts
	I1216 10:32:46.202017  848599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20107-840384/.minikube/ca.pem (1082 bytes)
	I1216 10:32:46.202141  848599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20107-840384/.minikube/cert.pem (1123 bytes)
	I1216 10:32:46.202206  848599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20107-840384/.minikube/key.pem (1675 bytes)
	I1216 10:32:46.202267  848599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20107-840384/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca-key.pem org=jenkins.addons-109663 san=[127.0.0.1 192.168.49.2 addons-109663 localhost minikube]
	I1216 10:32:46.342382  848599 provision.go:177] copyRemoteCerts
	I1216 10:32:46.342433  848599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 10:32:46.342468  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:46.358354  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:32:46.448060  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 10:32:46.469314  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 10:32:46.489705  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 10:32:46.509732  848599 provision.go:87] duration metric: took 322.814241ms to configureAuth
	I1216 10:32:46.509759  848599 ubuntu.go:193] setting minikube options for container-runtime
	I1216 10:32:46.509910  848599 config.go:182] Loaded profile config "addons-109663": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:32:46.510000  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:46.526470  848599 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:46.526646  848599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1216 10:32:46.526667  848599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 10:32:46.731259  848599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
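Note: the drop-in written above hands the whole service CIDR (10.96.0.0/12) to CRI-O as an insecure registry, presumably so images served from in-cluster registries on the service network can be pulled without TLS. A quick way to confirm the file on the node, reusing the test's binary and profile (a sketch, not part of the test run):

	out/minikube-linux-amd64 -p addons-109663 ssh "cat /etc/sysconfig/crio.minikube"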
	I1216 10:32:46.731288  848599 machine.go:96] duration metric: took 3.97958665s to provisionDockerMachine
	I1216 10:32:46.731304  848599 client.go:171] duration metric: took 14.458503354s to LocalClient.Create
	I1216 10:32:46.731327  848599 start.go:167] duration metric: took 14.458580941s to libmachine.API.Create "addons-109663"
	I1216 10:32:46.731337  848599 start.go:293] postStartSetup for "addons-109663" (driver="docker")
	I1216 10:32:46.731348  848599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 10:32:46.731400  848599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 10:32:46.731446  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:46.748035  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:32:46.835559  848599 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 10:32:46.838341  848599 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 10:32:46.838368  848599 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1216 10:32:46.838385  848599 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1216 10:32:46.838394  848599 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1216 10:32:46.838411  848599 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-840384/.minikube/addons for local assets ...
	I1216 10:32:46.838464  848599 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-840384/.minikube/files for local assets ...
	I1216 10:32:46.838507  848599 start.go:296] duration metric: took 107.161933ms for postStartSetup
	I1216 10:32:46.838809  848599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-109663
	I1216 10:32:46.854233  848599 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/config.json ...
	I1216 10:32:46.854469  848599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 10:32:46.854512  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:46.869838  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:32:46.959575  848599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 10:32:46.963386  848599 start.go:128] duration metric: took 14.692586018s to createHost
	I1216 10:32:46.963416  848599 start.go:83] releasing machines lock for "addons-109663", held for 14.692749507s
	I1216 10:32:46.963496  848599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-109663
	I1216 10:32:46.978640  848599 ssh_runner.go:195] Run: cat /version.json
	I1216 10:32:46.978677  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:46.978701  848599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 10:32:46.978764  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:46.995635  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:32:46.996158  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:32:47.078650  848599 ssh_runner.go:195] Run: systemctl --version
	I1216 10:32:47.143743  848599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 10:32:47.278506  848599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 10:32:47.282415  848599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 10:32:47.299071  848599 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1216 10:32:47.299151  848599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 10:32:47.324690  848599 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
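Both the loopback config and the image's stock bridge/podman configs are renamed with a .mk_disabled suffix here, so that the kindnet manifest applied later is the only CNI configuration CRI-O sees. One way to inspect the result on the node (a hedged check, not run by the test):

	out/minikube-linux-amd64 -p addons-109663 ssh "ls /etc/cni/net.d"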
	I1216 10:32:47.324713  848599 start.go:495] detecting cgroup driver to use...
	I1216 10:32:47.324748  848599 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 10:32:47.324785  848599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 10:32:47.338062  848599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 10:32:47.347179  848599 docker.go:217] disabling cri-docker service (if available) ...
	I1216 10:32:47.347216  848599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 10:32:47.358730  848599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 10:32:47.370504  848599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 10:32:47.449813  848599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 10:32:47.518777  848599 docker.go:233] disabling docker service ...
	I1216 10:32:47.518823  848599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 10:32:47.535753  848599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 10:32:47.545023  848599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 10:32:47.617016  848599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 10:32:47.691960  848599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 10:32:47.701141  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 10:32:47.714441  848599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 10:32:47.714485  848599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:47.722745  848599 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 10:32:47.722785  848599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:47.731282  848599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:47.739260  848599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:47.747136  848599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 10:32:47.754606  848599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:47.762450  848599 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:47.775303  848599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:47.783168  848599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 10:32:47.789866  848599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 10:32:47.796667  848599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 10:32:47.867618  848599 ssh_runner.go:195] Run: sudo systemctl restart crio
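Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands; other defaults already present in the drop-in are omitted):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]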
	I1216 10:32:47.965258  848599 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 10:32:47.965321  848599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 10:32:47.968509  848599 start.go:563] Will wait 60s for crictl version
	I1216 10:32:47.968560  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:32:47.971345  848599 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 10:32:48.003861  848599 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1216 10:32:48.003945  848599 ssh_runner.go:195] Run: crio --version
	I1216 10:32:48.037600  848599 ssh_runner.go:195] Run: crio --version
	I1216 10:32:48.070624  848599 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1216 10:32:48.071731  848599 cli_runner.go:164] Run: docker network inspect addons-109663 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 10:32:48.086790  848599 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 10:32:48.089949  848599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
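The bash one-liner above pins host.minikube.internal to the Docker network gateway (192.168.49.1) in the node's /etc/hosts. Assuming the profile is still running, the entry can be double-checked with:

	out/minikube-linux-amd64 -p addons-109663 ssh "grep host.minikube.internal /etc/hosts"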
	I1216 10:32:48.099631  848599 kubeadm.go:883] updating cluster {Name:addons-109663 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-109663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 10:32:48.099753  848599 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 10:32:48.099811  848599 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 10:32:48.162586  848599 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 10:32:48.162609  848599 crio.go:433] Images already preloaded, skipping extraction
	I1216 10:32:48.162661  848599 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 10:32:48.193727  848599 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 10:32:48.193748  848599 cache_images.go:84] Images are preloaded, skipping loading
	I1216 10:32:48.193759  848599 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1216 10:32:48.193856  848599 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-109663 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-109663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 10:32:48.193930  848599 ssh_runner.go:195] Run: crio config
	I1216 10:32:48.233450  848599 cni.go:84] Creating CNI manager for ""
	I1216 10:32:48.233469  848599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 10:32:48.233479  848599 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 10:32:48.233499  848599 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-109663 NodeName:addons-109663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 10:32:48.233626  848599 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-109663"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 10:32:48.233678  848599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1216 10:32:48.241287  848599 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 10:32:48.241353  848599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 10:32:48.248690  848599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 10:32:48.263749  848599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 10:32:48.278768  848599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
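Once kubeadm.yaml.new has landed on the node (it is later copied to kubeadm.yaml before init), the rendered configuration can be sanity-checked with kubeadm's own validator, available in recent kubeadm releases (a sketch, not something this test runs):

	out/minikube-linux-amd64 -p addons-109663 ssh "sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"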
	I1216 10:32:48.293446  848599 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 10:32:48.296360  848599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 10:32:48.305456  848599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 10:32:48.385855  848599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 10:32:48.396926  848599 certs.go:68] Setting up /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663 for IP: 192.168.49.2
	I1216 10:32:48.396946  848599 certs.go:194] generating shared ca certs ...
	I1216 10:32:48.396972  848599 certs.go:226] acquiring lock for ca certs: {Name:mkc11fd68d423e1cca90bec28435e0a6c7ecf1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.397158  848599 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20107-840384/.minikube/ca.key
	I1216 10:32:48.466160  848599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-840384/.minikube/ca.crt ...
	I1216 10:32:48.466182  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/ca.crt: {Name:mk1859f6bdff9985876c6f50db5f2d1280c287c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.466320  848599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-840384/.minikube/ca.key ...
	I1216 10:32:48.466340  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/ca.key: {Name:mk92e534493378752c6e08cd41ae73570fe64ae3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.466434  848599 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20107-840384/.minikube/proxy-client-ca.key
	I1216 10:32:48.531063  848599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-840384/.minikube/proxy-client-ca.crt ...
	I1216 10:32:48.531083  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/proxy-client-ca.crt: {Name:mkbd2c16dce66b8bd8800e09edb15d99e74a3dee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.531214  848599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-840384/.minikube/proxy-client-ca.key ...
	I1216 10:32:48.531227  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/proxy-client-ca.key: {Name:mkbd79e00e7ff3c72871d6c44df9bbc55c8438ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.531295  848599 certs.go:256] generating profile certs ...
	I1216 10:32:48.531350  848599 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.key
	I1216 10:32:48.531370  848599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt with IP's: []
	I1216 10:32:48.582934  848599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt ...
	I1216 10:32:48.582951  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: {Name:mk7800253813d63a2b9feff6a9f93fbd096ed71c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.583055  848599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.key ...
	I1216 10:32:48.583065  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.key: {Name:mk68a33bf12fa88f4decce469c4693c84cfbbe9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.583137  848599 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.key.5a4e409c
	I1216 10:32:48.583153  848599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.crt.5a4e409c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1216 10:32:48.795073  848599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.crt.5a4e409c ...
	I1216 10:32:48.795097  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.crt.5a4e409c: {Name:mkb1a96aec38a507038981d80b8c62dd0085ece6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.795228  848599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.key.5a4e409c ...
	I1216 10:32:48.795240  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.key.5a4e409c: {Name:mk0eaba652e54fc0326310c214d334efd837fdd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.795306  848599 certs.go:381] copying /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.crt.5a4e409c -> /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.crt
	I1216 10:32:48.795380  848599 certs.go:385] copying /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.key.5a4e409c -> /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.key
	I1216 10:32:48.795425  848599 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/proxy-client.key
	I1216 10:32:48.795441  848599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/proxy-client.crt with IP's: []
	I1216 10:32:49.185378  848599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/proxy-client.crt ...
	I1216 10:32:49.185403  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/proxy-client.crt: {Name:mkfb8512dec95af5f7fe9be594be404ecbc3feb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:49.185541  848599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/proxy-client.key ...
	I1216 10:32:49.185553  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/proxy-client.key: {Name:mk1e40828877323680e6bc49b0f353b0f4a8d014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:49.185723  848599 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 10:32:49.185757  848599 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca.pem (1082 bytes)
	I1216 10:32:49.185782  848599 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/cert.pem (1123 bytes)
	I1216 10:32:49.185813  848599 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/key.pem (1675 bytes)
	I1216 10:32:49.186471  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 10:32:49.208462  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 10:32:49.229862  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 10:32:49.252897  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 10:32:49.272707  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 10:32:49.292707  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 10:32:49.312720  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 10:32:49.332990  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 10:32:49.352947  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 10:32:49.372714  848599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
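The apiserver certificate copied above was generated with the IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2] listed earlier (plus whatever DNS names minikube adds). One way to verify the SANs on the node, assuming openssl is available in the image as it is used elsewhere in this log:

	out/minikube-linux-amd64 -p addons-109663 ssh "sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'"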
	I1216 10:32:49.387254  848599 ssh_runner.go:195] Run: openssl version
	I1216 10:32:49.391984  848599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 10:32:49.400361  848599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 10:32:49.403124  848599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1216 10:32:49.403185  848599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 10:32:49.409075  848599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
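The b5213941.0 symlink name is the subject hash printed by the openssl x509 -hash call above; OpenSSL looks certificates up in /etc/ssl/certs by <hash>.<n>. Reproducing the hash by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the /etc/ssl/certs/b5213941.0 link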
	I1216 10:32:49.416721  848599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 10:32:49.419513  848599 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 10:32:49.419554  848599 kubeadm.go:392] StartCluster: {Name:addons-109663 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-109663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 10:32:49.419649  848599 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 10:32:49.419715  848599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 10:32:49.451626  848599 cri.go:89] found id: ""
	I1216 10:32:49.451684  848599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 10:32:49.459035  848599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 10:32:49.466323  848599 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1216 10:32:49.466368  848599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 10:32:49.473764  848599 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 10:32:49.473786  848599 kubeadm.go:157] found existing configuration files:
	
	I1216 10:32:49.473827  848599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 10:32:49.481337  848599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 10:32:49.481398  848599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 10:32:49.488336  848599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 10:32:49.495564  848599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 10:32:49.495613  848599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 10:32:49.502569  848599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 10:32:49.509725  848599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 10:32:49.509771  848599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 10:32:49.517196  848599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 10:32:49.524832  848599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 10:32:49.524874  848599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 10:32:49.531758  848599 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 10:32:49.565735  848599 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1216 10:32:49.565811  848599 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 10:32:49.580542  848599 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1216 10:32:49.580609  848599 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-gcp
	I1216 10:32:49.580646  848599 kubeadm.go:310] OS: Linux
	I1216 10:32:49.580689  848599 kubeadm.go:310] CGROUPS_CPU: enabled
	I1216 10:32:49.580778  848599 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1216 10:32:49.580831  848599 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1216 10:32:49.580871  848599 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1216 10:32:49.580933  848599 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1216 10:32:49.581006  848599 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1216 10:32:49.581096  848599 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1216 10:32:49.581174  848599 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1216 10:32:49.581246  848599 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1216 10:32:49.629850  848599 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 10:32:49.630004  848599 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 10:32:49.630169  848599 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 10:32:49.636020  848599 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 10:32:49.639441  848599 out.go:235]   - Generating certificates and keys ...
	I1216 10:32:49.639556  848599 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 10:32:49.639619  848599 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 10:32:49.836390  848599 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 10:32:50.046656  848599 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1216 10:32:50.397346  848599 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1216 10:32:50.460502  848599 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1216 10:32:50.635424  848599 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1216 10:32:50.635586  848599 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-109663 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 10:32:50.820560  848599 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1216 10:32:50.820691  848599 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-109663 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 10:32:51.004936  848599 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 10:32:51.062758  848599 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 10:32:51.170931  848599 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1216 10:32:51.170996  848599 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 10:32:51.335077  848599 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 10:32:51.557386  848599 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 10:32:51.984782  848599 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 10:32:52.326144  848599 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 10:32:52.700266  848599 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 10:32:52.700739  848599 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 10:32:52.703004  848599 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 10:32:52.704967  848599 out.go:235]   - Booting up control plane ...
	I1216 10:32:52.705051  848599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 10:32:52.705133  848599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 10:32:52.705698  848599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 10:32:52.714141  848599 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 10:32:52.718979  848599 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 10:32:52.719037  848599 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 10:32:52.797628  848599 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 10:32:52.797772  848599 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 10:32:53.298364  848599 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.804305ms
	I1216 10:32:53.298473  848599 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 10:32:57.300009  848599 kubeadm.go:310] [api-check] The API server is healthy after 4.001662107s
	I1216 10:32:57.310037  848599 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 10:32:57.319228  848599 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 10:32:57.334100  848599 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 10:32:57.334362  848599 kubeadm.go:310] [mark-control-plane] Marking the node addons-109663 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 10:32:57.340771  848599 kubeadm.go:310] [bootstrap-token] Using token: 2h4i74.yidhy7fpg06tydg2
	I1216 10:32:57.341964  848599 out.go:235]   - Configuring RBAC rules ...
	I1216 10:32:57.342133  848599 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 10:32:57.345049  848599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 10:32:57.350019  848599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 10:32:57.352300  848599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 10:32:57.355264  848599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 10:32:57.357395  848599 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 10:32:57.705816  848599 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 10:32:58.121923  848599 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 10:32:58.704608  848599 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 10:32:58.705594  848599 kubeadm.go:310] 
	I1216 10:32:58.705701  848599 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 10:32:58.705720  848599 kubeadm.go:310] 
	I1216 10:32:58.705826  848599 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 10:32:58.705837  848599 kubeadm.go:310] 
	I1216 10:32:58.705873  848599 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 10:32:58.705959  848599 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 10:32:58.706029  848599 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 10:32:58.706038  848599 kubeadm.go:310] 
	I1216 10:32:58.706098  848599 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 10:32:58.706107  848599 kubeadm.go:310] 
	I1216 10:32:58.706168  848599 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 10:32:58.706183  848599 kubeadm.go:310] 
	I1216 10:32:58.706227  848599 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 10:32:58.706298  848599 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 10:32:58.706360  848599 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 10:32:58.706369  848599 kubeadm.go:310] 
	I1216 10:32:58.706437  848599 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 10:32:58.706507  848599 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 10:32:58.706514  848599 kubeadm.go:310] 
	I1216 10:32:58.706586  848599 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2h4i74.yidhy7fpg06tydg2 \
	I1216 10:32:58.706682  848599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e6a6471f4384e10723e2292fb8d114ab4ea25aa738d7f29c5187bb98e939b6b4 \
	I1216 10:32:58.706706  848599 kubeadm.go:310] 	--control-plane 
	I1216 10:32:58.706718  848599 kubeadm.go:310] 
	I1216 10:32:58.706818  848599 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 10:32:58.706831  848599 kubeadm.go:310] 
	I1216 10:32:58.706927  848599 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2h4i74.yidhy7fpg06tydg2 \
	I1216 10:32:58.707051  848599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e6a6471f4384e10723e2292fb8d114ab4ea25aa738d7f29c5187bb98e939b6b4 
	I1216 10:32:58.709379  848599 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-gcp\n", err: exit status 1
	I1216 10:32:58.709495  848599 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
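The bootstrap token shown in the join commands above expires after 24h (see the bootstrapTokens ttl in the kubeadm config). If a fresh join command were ever needed, it could be regenerated on the control plane, for example:

	out/minikube-linux-amd64 -p addons-109663 ssh "sudo /var/lib/minikube/binaries/v1.31.2/kubeadm token create --print-join-command"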
	I1216 10:32:58.709516  848599 cni.go:84] Creating CNI manager for ""
	I1216 10:32:58.709524  848599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 10:32:58.711000  848599 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1216 10:32:58.712127  848599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 10:32:58.715765  848599 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1216 10:32:58.715784  848599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 10:32:58.731953  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 10:32:58.917087  848599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 10:32:58.917200  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:58.917234  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-109663 minikube.k8s.io/updated_at=2024_12_16T10_32_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8 minikube.k8s.io/name=addons-109663 minikube.k8s.io/primary=true
	I1216 10:32:58.924437  848599 ops.go:34] apiserver oom_adj: -16
	I1216 10:32:58.985975  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:59.486347  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:59.986971  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:00.486306  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:00.986656  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:01.486381  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:01.986792  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:02.486214  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:02.986029  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:03.486249  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:03.986520  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:04.087749  848599 kubeadm.go:1113] duration metric: took 5.170647368s to wait for elevateKubeSystemPrivileges
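The repeated "kubectl get sa default" calls above are minikube polling until the default ServiceAccount exists, i.e. until the service-account controller has caught up. The equivalent wait, sketched as a plain shell loop against this profile's kubeconfig context:

	until kubectl --context addons-109663 -n default get sa default >/dev/null 2>&1; do sleep 1; done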
	I1216 10:33:04.087800  848599 kubeadm.go:394] duration metric: took 14.668249445s to StartCluster
	I1216 10:33:04.087826  848599 settings.go:142] acquiring lock: {Name:mk06b7df26b8c35e37c6f668a6089af3b5005238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:33:04.087950  848599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20107-840384/kubeconfig
	I1216 10:33:04.088601  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/kubeconfig: {Name:mkf0f71705623f4096af1601d96997d88188e951 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:33:04.088814  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 10:33:04.088833  848599 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 10:33:04.088909  848599 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1216 10:33:04.089044  848599 addons.go:69] Setting yakd=true in profile "addons-109663"
	I1216 10:33:04.089083  848599 addons.go:234] Setting addon yakd=true in "addons-109663"
	I1216 10:33:04.089098  848599 addons.go:69] Setting inspektor-gadget=true in profile "addons-109663"
	I1216 10:33:04.089104  848599 config.go:182] Loaded profile config "addons-109663": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:33:04.089121  848599 addons.go:234] Setting addon inspektor-gadget=true in "addons-109663"
	I1216 10:33:04.089133  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.089128  848599 addons.go:69] Setting default-storageclass=true in profile "addons-109663"
	I1216 10:33:04.089145  848599 addons.go:69] Setting cloud-spanner=true in profile "addons-109663"
	I1216 10:33:04.089171  848599 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-109663"
	I1216 10:33:04.089173  848599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-109663"
	I1216 10:33:04.089187  848599 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-109663"
	I1216 10:33:04.089194  848599 addons.go:69] Setting ingress=true in profile "addons-109663"
	I1216 10:33:04.089197  848599 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-109663"
	I1216 10:33:04.089213  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.089219  848599 addons.go:234] Setting addon ingress=true in "addons-109663"
	I1216 10:33:04.089236  848599 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-109663"
	I1216 10:33:04.089256  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.089270  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.089587  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.089704  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.089738  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.089751  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.089756  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.089986  848599 addons.go:69] Setting ingress-dns=true in profile "addons-109663"
	I1216 10:33:04.090008  848599 addons.go:234] Setting addon ingress-dns=true in "addons-109663"
	I1216 10:33:04.090019  848599 addons.go:69] Setting storage-provisioner=true in profile "addons-109663"
	I1216 10:33:04.090042  848599 addons.go:234] Setting addon storage-provisioner=true in "addons-109663"
	I1216 10:33:04.090056  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.090072  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.090107  848599 addons.go:69] Setting volcano=true in profile "addons-109663"
	I1216 10:33:04.090146  848599 addons.go:234] Setting addon volcano=true in "addons-109663"
	I1216 10:33:04.090170  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.090589  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.090631  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.090645  848599 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-109663"
	I1216 10:33:04.090662  848599 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-109663"
	I1216 10:33:04.090912  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.091232  848599 out.go:177] * Verifying Kubernetes components...
	I1216 10:33:04.089187  848599 addons.go:234] Setting addon cloud-spanner=true in "addons-109663"
	I1216 10:33:04.091364  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.089180  848599 addons.go:69] Setting gcp-auth=true in profile "addons-109663"
	I1216 10:33:04.091483  848599 mustload.go:65] Loading cluster: addons-109663
	I1216 10:33:04.091537  848599 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-109663"
	I1216 10:33:04.091591  848599 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-109663"
	I1216 10:33:04.091635  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.091761  848599 config.go:182] Loaded profile config "addons-109663": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:33:04.091927  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.092030  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.092131  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.095721  848599 addons.go:69] Setting metrics-server=true in profile "addons-109663"
	I1216 10:33:04.095747  848599 addons.go:234] Setting addon metrics-server=true in "addons-109663"
	I1216 10:33:04.095746  848599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 10:33:04.095777  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.095868  848599 addons.go:69] Setting registry=true in profile "addons-109663"
	I1216 10:33:04.091362  848599 addons.go:69] Setting volumesnapshots=true in profile "addons-109663"
	I1216 10:33:04.089169  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.095956  848599 addons.go:234] Setting addon registry=true in "addons-109663"
	I1216 10:33:04.095996  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.096414  848599 addons.go:234] Setting addon volumesnapshots=true in "addons-109663"
	I1216 10:33:04.096445  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.096452  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.096755  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.096949  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.097026  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.090631  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.130675  848599 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1216 10:33:04.132180  848599 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1216 10:33:04.132204  848599 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1216 10:33:04.132270  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.146511  848599 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1216 10:33:04.146594  848599 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1216 10:33:04.147723  848599 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 10:33:04.147747  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1216 10:33:04.147825  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.149145  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1216 10:33:04.150251  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1216 10:33:04.150478  848599 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-109663"
	I1216 10:33:04.150545  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.150630  848599 out.go:177]   - Using image docker.io/registry:2.8.3
	I1216 10:33:04.151011  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.152794  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1216 10:33:04.152887  848599 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1216 10:33:04.154036  848599 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1216 10:33:04.154055  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1216 10:33:04.154111  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.154856  848599 addons.go:234] Setting addon default-storageclass=true in "addons-109663"
	I1216 10:33:04.154903  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.155351  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.155923  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1216 10:33:04.156321  848599 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1216 10:33:04.156342  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1216 10:33:04.156387  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.158101  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1216 10:33:04.159206  848599 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 10:33:04.160412  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1216 10:33:04.161739  848599 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 10:33:04.162741  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1216 10:33:04.163984  848599 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1216 10:33:04.164963  848599 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1216 10:33:04.165572  848599 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 10:33:04.165607  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1216 10:33:04.165667  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.167879  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1216 10:33:04.170258  848599 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 10:33:04.170302  848599 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	W1216 10:33:04.170310  848599 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1216 10:33:04.170277  848599 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1216 10:33:04.170471  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.172519  848599 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1216 10:33:04.172537  848599 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1216 10:33:04.172605  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.175569  848599 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1216 10:33:04.175587  848599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1216 10:33:04.175660  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.210930  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1216 10:33:04.212499  848599 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1216 10:33:04.212530  848599 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1216 10:33:04.212615  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.235651  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.235699  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.236358  848599 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 10:33:04.236380  848599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 10:33:04.236442  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.236549  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.236651  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.240224  848599 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1216 10:33:04.240241  848599 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1216 10:33:04.240414  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.240883  848599 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1216 10:33:04.241768  848599 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 10:33:04.241792  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1216 10:33:04.241845  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.242344  848599 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 10:33:04.242364  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1216 10:33:04.242425  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.243294  848599 out.go:177]   - Using image docker.io/busybox:stable
	I1216 10:33:04.244353  848599 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 10:33:04.244382  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1216 10:33:04.244429  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.244885  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.245310  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.246878  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.246930  848599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 10:33:04.248030  848599 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 10:33:04.248051  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 10:33:04.248105  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.263238  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.271380  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.271660  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.272475  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	W1216 10:33:04.279597  848599 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 10:33:04.279627  848599 retry.go:31] will retry after 144.207623ms: ssh: handshake failed: EOF
	I1216 10:33:04.288206  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.296070  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 10:33:04.296458  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.296707  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	W1216 10:33:04.297011  848599 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 10:33:04.297031  848599 retry.go:31] will retry after 279.355591ms: ssh: handshake failed: EOF
	I1216 10:33:04.494587  848599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 10:33:04.589727  848599 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1216 10:33:04.589761  848599 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1216 10:33:04.673229  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 10:33:04.679498  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 10:33:04.774125  848599 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1216 10:33:04.774165  848599 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1216 10:33:04.777542  848599 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 10:33:04.777581  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1216 10:33:04.783458  848599 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1216 10:33:04.783503  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1216 10:33:04.794089  848599 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1216 10:33:04.794123  848599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1216 10:33:04.794663  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 10:33:04.873178  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1216 10:33:04.879090  848599 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1216 10:33:04.879113  848599 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1216 10:33:04.881997  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 10:33:04.885458  848599 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1216 10:33:04.885483  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1216 10:33:04.892271  848599 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1216 10:33:04.892293  848599 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1216 10:33:04.895905  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1216 10:33:04.972830  848599 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 10:33:04.972868  848599 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 10:33:04.973550  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 10:33:04.979372  848599 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1216 10:33:04.979401  848599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1216 10:33:05.074625  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 10:33:05.076930  848599 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1216 10:33:05.076956  848599 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1216 10:33:05.186319  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 10:33:05.188659  848599 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1216 10:33:05.188699  848599 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1216 10:33:05.191024  848599 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 10:33:05.191062  848599 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 10:33:05.289209  848599 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1216 10:33:05.289291  848599 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1216 10:33:05.290410  848599 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1216 10:33:05.290476  848599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1216 10:33:05.372799  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1216 10:33:05.384174  848599 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1216 10:33:05.384201  848599 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1216 10:33:05.592500  848599 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 10:33:05.592549  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1216 10:33:05.773126  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 10:33:05.879400  848599 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1216 10:33:05.879511  848599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1216 10:33:05.888426  848599 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1216 10:33:05.888459  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1216 10:33:06.077724  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 10:33:06.178196  848599 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.882086183s)
	I1216 10:33:06.178247  848599 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
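The bash pipeline that completed here rewrites the coredns ConfigMap so that host.minikube.internal resolves to the gateway address 192.168.49.1. A rough Go equivalent of that read-modify-replace, assuming kubectl is reachable directly rather than through ssh_runner (the inserted hosts block and the 8-space indentation are copied from the sed expression in the log; everything else is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	kubectl := "kubectl" // the log runs /var/lib/minikube/binaries/v1.31.2/kubectl inside the node

	// Read the current coredns ConfigMap as YAML.
	out, err := exec.Command(kubectl, "-n", "kube-system", "get", "configmap", "coredns", "-o", "yaml").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "get configmap:", err)
		os.Exit(1)
	}

	// Splice the host record block in front of the forward plugin, mirroring
	// the sed expression in the log.
	hosts := "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"
	patched := strings.Replace(string(out), "        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)

	// Replace the ConfigMap with the patched manifest.
	cmd := exec.Command(kubectl, "replace", "-f", "-")
	cmd.Stdin = strings.NewReader(patched)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "replace configmap:", err)
		os.Exit(1)
	}
}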
	I1216 10:33:06.179598  848599 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.684975957s)
	I1216 10:33:06.180502  848599 node_ready.go:35] waiting up to 6m0s for node "addons-109663" to be "Ready" ...
	I1216 10:33:06.373531  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1216 10:33:06.389254  848599 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1216 10:33:06.389294  848599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1216 10:33:06.898854  848599 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-109663" context rescaled to 1 replicas
	I1216 10:33:06.985246  848599 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1216 10:33:06.985287  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1216 10:33:07.274010  848599 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1216 10:33:07.274100  848599 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1216 10:33:07.472478  848599 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1216 10:33:07.472515  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1216 10:33:07.687775  848599 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1216 10:33:07.687864  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1216 10:33:07.974316  848599 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 10:33:07.974399  848599 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1216 10:33:08.274679  848599 node_ready.go:53] node "addons-109663" has status "Ready":"False"
	I1216 10:33:08.287244  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 10:33:09.273222  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.599944933s)
	I1216 10:33:10.480914  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.801378523s)
	I1216 10:33:10.480957  848599 addons.go:475] Verifying addon ingress=true in "addons-109663"
	I1216 10:33:10.481001  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.686304837s)
	I1216 10:33:10.481109  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.607900487s)
	I1216 10:33:10.481203  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.599182204s)
	I1216 10:33:10.481438  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.585508598s)
	I1216 10:33:10.481459  848599 addons.go:475] Verifying addon registry=true in "addons-109663"
	I1216 10:33:10.481883  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.109009356s)
	I1216 10:33:10.481668  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.508095231s)
	I1216 10:33:10.481710  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.407059386s)
	I1216 10:33:10.481778  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.295416881s)
	I1216 10:33:10.481971  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.708752789s)
	I1216 10:33:10.481991  848599 addons.go:475] Verifying addon metrics-server=true in "addons-109663"
	I1216 10:33:10.482371  848599 out.go:177] * Verifying ingress addon...
	I1216 10:33:10.483212  848599 out.go:177] * Verifying registry addon...
	I1216 10:33:10.484880  848599 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1216 10:33:10.485888  848599 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1216 10:33:10.491352  848599 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1216 10:33:10.491375  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:10.492370  848599 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 10:33:10.492394  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 10:33:10.497556  848599 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
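The "object has been modified" failure above is the API server's optimistic-concurrency check: the StorageClass changed between the addon's read and its write, so the update's resourceVersion was stale. One way to sidestep that class of conflict is a merge patch, which carries no resourceVersion; a hedged sketch (the local-path class name and the is-default-class annotation come from the error, everything else is assumed and is not the addon's actual code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// A JSON merge patch sets the default-class annotation without a
	// read-modify-write, so it cannot hit a resourceVersion conflict.
	patch := `{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`
	cmd := exec.Command("kubectl", "patch", "storageclass", "local-path",
		"--type=merge", "-p", patch)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "patch failed:", err)
		os.Exit(1)
	}
}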
	I1216 10:33:10.685455  848599 node_ready.go:53] node "addons-109663" has status "Ready":"False"
	I1216 10:33:10.989291  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:10.990123  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:11.410531  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.332755094s)
	W1216 10:33:11.410572  848599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 10:33:11.410594  848599 retry.go:31] will retry after 146.951232ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
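The failed apply above bundles the VolumeSnapshotClass object with the CRDs that define it, so on the first pass the REST mapping for snapshot.storage.k8s.io/v1 does not exist yet; minikube simply retries (and at 10:33:11.558 re-runs the apply with --force) once the CRDs are established. A small retry-with-backoff sketch in that spirit, assumed and not minikube's actual retry.go:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply -f <file>` until it succeeds or the
// attempt budget is exhausted, doubling the delay between attempts.
func applyWithRetry(file string, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		out, e := exec.Command("kubectl", "apply", "-f", file).CombinedOutput()
		if e == nil {
			return nil
		}
		err = fmt.Errorf("attempt %d: %v: %s", i+1, e, out)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	// Path taken from the log; the attempt count and initial delay are assumptions.
	if err := applyWithRetry("/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml", 5, 150*time.Millisecond); err != nil {
		fmt.Println("apply failed:", err)
	}
}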
	I1216 10:33:11.410618  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.037021705s)
	I1216 10:33:11.412521  848599 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-109663 service yakd-dashboard -n yakd-dashboard
	
	I1216 10:33:11.476073  848599 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1216 10:33:11.476207  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:11.488433  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:11.488982  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:11.496693  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:11.558632  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 10:33:11.682055  848599 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1216 10:33:11.773243  848599 addons.go:234] Setting addon gcp-auth=true in "addons-109663"
	I1216 10:33:11.773316  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:11.773724  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:11.794622  848599 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1216 10:33:11.794702  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:11.815157  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.527792528s)
	I1216 10:33:11.815205  848599 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-109663"
	I1216 10:33:11.816541  848599 out.go:177] * Verifying csi-hostpath-driver addon...
	I1216 10:33:11.817906  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:11.818272  848599 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1216 10:33:11.876736  848599 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 10:33:11.876761  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:11.988859  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:11.988992  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:12.321133  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:12.488213  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:12.488670  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:12.820552  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:12.988673  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:12.988814  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:13.183539  848599 node_ready.go:53] node "addons-109663" has status "Ready":"False"
	I1216 10:33:13.321278  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:13.488254  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:13.488540  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:13.821374  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:13.988232  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:13.988468  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:14.319445  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.760767877s)
	I1216 10:33:14.319526  848599 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.524872285s)
	I1216 10:33:14.321240  848599 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 10:33:14.321724  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:14.323613  848599 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1216 10:33:14.324764  848599 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1216 10:33:14.324784  848599 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1216 10:33:14.342010  848599 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1216 10:33:14.342031  848599 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1216 10:33:14.358225  848599 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 10:33:14.358242  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1216 10:33:14.373588  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 10:33:14.489146  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:14.489195  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:14.688535  848599 addons.go:475] Verifying addon gcp-auth=true in "addons-109663"
	I1216 10:33:14.689726  848599 out.go:177] * Verifying gcp-auth addon...
	I1216 10:33:14.691774  848599 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1216 10:33:14.693794  848599 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1216 10:33:14.693814  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:14.821373  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:14.988073  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:14.988401  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:15.183828  848599 node_ready.go:53] node "addons-109663" has status "Ready":"False"
	I1216 10:33:15.195110  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:15.321272  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:15.487994  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:15.488083  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:15.693975  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:15.821122  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:15.988405  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:15.989637  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:16.194046  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:16.321171  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:16.487823  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:16.488160  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:16.693727  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:16.821230  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:16.988099  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:16.988112  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:17.194466  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:17.320574  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:17.488270  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:17.488459  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:17.683559  848599 node_ready.go:53] node "addons-109663" has status "Ready":"False"
	I1216 10:33:17.694310  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:17.821698  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:17.988850  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:17.989287  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:18.194610  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:18.320610  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:18.488098  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:18.488541  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:18.694691  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:18.820791  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:18.988425  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:18.988881  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:19.194979  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:19.321154  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:19.488238  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:19.488592  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:19.694150  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:19.821326  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:19.988036  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:19.988321  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:20.182762  848599 node_ready.go:53] node "addons-109663" has status "Ready":"False"
	I1216 10:33:20.194584  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:20.322587  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:20.488030  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:20.488439  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:20.694318  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:20.821269  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:20.987972  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:20.988640  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:21.195178  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:21.321555  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:21.488262  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:21.488509  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:21.694792  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:21.821025  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:21.988166  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:21.988808  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:22.183545  848599 node_ready.go:49] node "addons-109663" has status "Ready":"True"
	I1216 10:33:22.183575  848599 node_ready.go:38] duration metric: took 16.003041871s for node "addons-109663" to be "Ready" ...
	I1216 10:33:22.183591  848599 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 10:33:22.194312  848599 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-nhj8x" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:22.197522  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:22.322250  848599 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 10:33:22.322336  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:22.490226  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:22.490635  848599 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 10:33:22.490660  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:22.696479  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:22.824283  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:22.991433  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:22.992361  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:23.195362  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:23.322483  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:23.489322  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:23.489683  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:23.695267  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:23.822532  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:23.989726  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:23.990631  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:24.195008  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:24.198627  848599 pod_ready.go:103] pod "amd-gpu-device-plugin-nhj8x" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:24.321616  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:24.489636  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:24.489791  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:24.695007  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:24.822843  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:24.988844  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:24.989122  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:25.195436  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:25.323096  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:25.488740  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:25.489022  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:25.694617  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:25.698720  848599 pod_ready.go:93] pod "amd-gpu-device-plugin-nhj8x" in "kube-system" namespace has status "Ready":"True"
	I1216 10:33:25.698739  848599 pod_ready.go:82] duration metric: took 3.504402288s for pod "amd-gpu-device-plugin-nhj8x" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.698748  848599 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ksv2k" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.702610  848599 pod_ready.go:93] pod "coredns-7c65d6cfc9-ksv2k" in "kube-system" namespace has status "Ready":"True"
	I1216 10:33:25.702627  848599 pod_ready.go:82] duration metric: took 3.872629ms for pod "coredns-7c65d6cfc9-ksv2k" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.702644  848599 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-109663" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.706224  848599 pod_ready.go:93] pod "etcd-addons-109663" in "kube-system" namespace has status "Ready":"True"
	I1216 10:33:25.706253  848599 pod_ready.go:82] duration metric: took 3.589378ms for pod "etcd-addons-109663" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.706269  848599 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-109663" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.709711  848599 pod_ready.go:93] pod "kube-apiserver-addons-109663" in "kube-system" namespace has status "Ready":"True"
	I1216 10:33:25.709728  848599 pod_ready.go:82] duration metric: took 3.450709ms for pod "kube-apiserver-addons-109663" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.709736  848599 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-109663" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.713265  848599 pod_ready.go:93] pod "kube-controller-manager-addons-109663" in "kube-system" namespace has status "Ready":"True"
	I1216 10:33:25.713281  848599 pod_ready.go:82] duration metric: took 3.538224ms for pod "kube-controller-manager-addons-109663" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.713292  848599 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dw2js" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.822420  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:25.989042  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:25.989188  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:26.097285  848599 pod_ready.go:93] pod "kube-proxy-dw2js" in "kube-system" namespace has status "Ready":"True"
	I1216 10:33:26.097307  848599 pod_ready.go:82] duration metric: took 384.009465ms for pod "kube-proxy-dw2js" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:26.097317  848599 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-109663" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:26.194937  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:26.322961  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:26.489581  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:26.489586  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:26.497405  848599 pod_ready.go:93] pod "kube-scheduler-addons-109663" in "kube-system" namespace has status "Ready":"True"
	I1216 10:33:26.497429  848599 pod_ready.go:82] duration metric: took 400.104712ms for pod "kube-scheduler-addons-109663" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:26.497442  848599 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:26.696384  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:26.823165  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:26.989901  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:26.990164  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:27.195958  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:27.322795  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:27.489994  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:27.490526  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:27.695991  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:27.822606  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:27.989159  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:27.989525  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:28.195007  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:28.323133  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:28.489488  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:28.489846  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:28.502566  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:28.695362  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:28.823297  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:28.989059  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:28.989279  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:29.194687  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:29.375493  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:29.489995  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:29.493278  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:29.695153  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:29.823826  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:29.991148  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:29.991714  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:30.195680  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:30.322033  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:30.489421  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:30.489469  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:30.694988  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:30.876318  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:30.989582  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:30.990008  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:31.003003  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:31.195966  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:31.323096  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:31.489187  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:31.489815  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:31.695583  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:31.822994  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:31.988983  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:31.989237  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:32.195592  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:32.323277  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:32.488903  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:32.489392  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:32.696506  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:32.823243  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:32.988885  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:32.989148  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:33.196341  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:33.322674  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:33.488634  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:33.488667  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:33.502086  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:33.693815  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:33.821559  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:33.988429  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:33.988769  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:34.194152  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:34.325349  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:34.488396  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:34.488714  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:34.694574  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:34.876109  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:34.990189  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:34.990724  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:35.195194  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:35.375444  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:35.492191  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:35.493688  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:35.502341  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:35.695132  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:35.876051  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:35.990210  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:35.993798  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:36.195051  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:36.322207  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:36.489307  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:36.489410  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:36.696303  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:36.823455  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:36.989688  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:36.989711  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:37.195654  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:37.323150  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:37.489519  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:37.489577  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:37.502620  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:37.695819  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:37.823526  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:37.989517  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:37.989692  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:38.195490  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:38.323913  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:38.489512  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:38.489639  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:38.695009  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:38.823240  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:38.989637  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:38.989966  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:39.195863  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:39.322922  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:39.489532  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:39.489839  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:39.502745  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:39.695567  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:39.822774  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:39.989533  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:39.989845  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:40.195641  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:40.376146  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:40.490514  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:40.490606  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:40.696062  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:40.875447  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:40.989883  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:40.990067  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:41.196047  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:41.324554  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:41.489061  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:41.489773  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:41.502805  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:41.695780  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:41.823285  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:41.989357  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:41.989524  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:42.195514  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:42.323286  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:42.489536  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:42.489650  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:42.695923  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:42.823083  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:42.989403  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:42.989743  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:43.195428  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:43.374868  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:43.489367  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:43.489663  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:43.503985  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:43.696012  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:43.822523  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:43.989041  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:43.989507  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:44.196437  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:44.322827  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:44.489009  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:44.489985  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:44.695640  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:44.823416  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:44.989302  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:44.989712  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:45.194931  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:45.322806  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:45.489197  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:45.489242  848599 kapi.go:107] duration metric: took 35.003353773s to wait for kubernetes.io/minikube-addons=registry ...
	I1216 10:33:45.694246  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:45.822395  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:45.988886  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:46.003221  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:46.194491  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:46.322813  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:46.488662  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:46.694552  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:46.822549  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:46.989251  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:47.195148  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:47.322802  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:47.490015  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:47.694873  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:47.823284  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:47.989150  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:48.084009  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:48.195599  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:48.376370  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:48.489896  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:48.696319  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:48.876501  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:48.992663  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:49.195582  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:49.375953  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:49.490297  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:49.695610  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:49.823687  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:49.989632  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:50.195374  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:50.323555  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:50.489368  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:50.503787  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:50.695450  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:50.823129  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:50.988637  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:51.195270  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:51.323080  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:51.508782  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:51.695217  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:51.823494  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:51.989770  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:52.195617  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:52.321818  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:52.489915  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:52.696638  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:52.826492  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:52.988736  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:53.003486  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:53.195901  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:53.323388  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:53.490222  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:53.695771  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:53.824714  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:53.989114  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:54.195596  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:54.323421  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:54.488781  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:54.694998  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:54.822746  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:54.989975  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:55.195250  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:55.323393  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:55.489828  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:55.502636  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:55.695371  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:55.823548  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:55.988753  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:56.195311  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:56.322360  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:56.488475  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:56.695360  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:56.823160  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:56.988763  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:57.195557  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:57.323228  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:57.488760  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:57.502986  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:57.695786  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:57.822469  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:57.989745  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:58.194764  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:58.322146  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:58.489249  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:58.695078  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:58.822330  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:58.988720  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:59.195576  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:59.323259  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:59.490215  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:59.696072  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:59.823107  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:00.010677  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:00.012028  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:00.194618  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:34:00.322187  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:00.488296  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:00.695347  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:34:00.822633  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:00.989168  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:01.194868  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:34:01.322119  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:01.489010  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:01.695366  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:34:01.823165  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:01.988799  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:02.194750  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:34:02.322202  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:02.488979  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:02.502416  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:02.695830  848599 kapi.go:107] duration metric: took 48.004052123s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1216 10:34:02.697415  848599 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-109663 cluster.
	I1216 10:34:02.698542  848599 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1216 10:34:02.699693  848599 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1216 10:34:02.874592  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:02.989784  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:03.322719  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:03.489551  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:03.823450  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:03.989440  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:04.324087  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:04.489445  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:04.502644  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:04.822353  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:04.989069  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:05.323880  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:05.512296  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:05.875523  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:05.990166  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:06.397131  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:06.489855  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:06.574453  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:06.877071  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:06.989900  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:07.376293  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:07.494377  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:07.878721  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:07.988468  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:08.323271  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:08.489196  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:08.823385  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:08.988831  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:09.003062  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:09.323192  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:09.489553  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:09.822684  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:09.989937  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:10.323395  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:10.489825  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:10.823045  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:10.988842  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:11.003275  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:11.322851  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:11.489655  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:11.823870  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:11.989881  848599 kapi.go:107] duration metric: took 1m1.505001565s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1216 10:34:12.322605  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:12.876290  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:13.003959  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:13.324089  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:13.823333  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:14.322846  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:14.822531  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:15.323702  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:15.503382  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:15.822069  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:16.322918  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:16.822182  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:17.322594  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:17.823222  848599 kapi.go:107] duration metric: took 1m6.004947421s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1216 10:34:17.824726  848599 out.go:177] * Enabled addons: storage-provisioner, amd-gpu-device-plugin, cloud-spanner, inspektor-gadget, ingress-dns, nvidia-device-plugin, metrics-server, default-storageclass, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1216 10:34:17.825819  848599 addons.go:510] duration metric: took 1m13.736912479s for enable addons: enabled=[storage-provisioner amd-gpu-device-plugin cloud-spanner inspektor-gadget ingress-dns nvidia-device-plugin metrics-server default-storageclass yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1216 10:34:18.003373  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:20.502137  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:22.502669  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:24.503206  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:26.503344  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:28.620772  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:31.003959  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:33.002781  848599 pod_ready.go:93] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"True"
	I1216 10:34:33.002804  848599 pod_ready.go:82] duration metric: took 1m6.505353818s for pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace to be "Ready" ...
	I1216 10:34:33.002816  848599 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-k4znm" in "kube-system" namespace to be "Ready" ...
	I1216 10:34:33.007179  848599 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-k4znm" in "kube-system" namespace has status "Ready":"True"
	I1216 10:34:33.007200  848599 pod_ready.go:82] duration metric: took 4.376449ms for pod "nvidia-device-plugin-daemonset-k4znm" in "kube-system" namespace to be "Ready" ...
	I1216 10:34:33.007222  848599 pod_ready.go:39] duration metric: took 1m10.823613152s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 10:34:33.007246  848599 api_server.go:52] waiting for apiserver process to appear ...
	I1216 10:34:33.007317  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 10:34:33.007419  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 10:34:33.041826  848599 cri.go:89] found id: "c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804"
	I1216 10:34:33.041843  848599 cri.go:89] found id: ""
	I1216 10:34:33.041852  848599 logs.go:282] 1 containers: [c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804]
	I1216 10:34:33.041893  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:33.045200  848599 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 10:34:33.045244  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 10:34:33.078373  848599 cri.go:89] found id: "93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6"
	I1216 10:34:33.078390  848599 cri.go:89] found id: ""
	I1216 10:34:33.078398  848599 logs.go:282] 1 containers: [93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6]
	I1216 10:34:33.078432  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:33.081776  848599 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 10:34:33.081822  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 10:34:33.113665  848599 cri.go:89] found id: "d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190"
	I1216 10:34:33.113684  848599 cri.go:89] found id: ""
	I1216 10:34:33.113692  848599 logs.go:282] 1 containers: [d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190]
	I1216 10:34:33.113726  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:33.116711  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 10:34:33.116773  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 10:34:33.149097  848599 cri.go:89] found id: "c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53"
	I1216 10:34:33.149116  848599 cri.go:89] found id: ""
	I1216 10:34:33.149129  848599 logs.go:282] 1 containers: [c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53]
	I1216 10:34:33.149163  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:33.152109  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 10:34:33.152155  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 10:34:33.182865  848599 cri.go:89] found id: "c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc"
	I1216 10:34:33.182884  848599 cri.go:89] found id: ""
	I1216 10:34:33.182894  848599 logs.go:282] 1 containers: [c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc]
	I1216 10:34:33.182927  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:33.185812  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 10:34:33.185877  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 10:34:33.217210  848599 cri.go:89] found id: "bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50"
	I1216 10:34:33.217232  848599 cri.go:89] found id: ""
	I1216 10:34:33.217244  848599 logs.go:282] 1 containers: [bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50]
	I1216 10:34:33.217278  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:33.220246  848599 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 10:34:33.220314  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 10:34:33.252320  848599 cri.go:89] found id: "9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9"
	I1216 10:34:33.252355  848599 cri.go:89] found id: ""
	I1216 10:34:33.252367  848599 logs.go:282] 1 containers: [9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9]
	I1216 10:34:33.252411  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:33.255333  848599 logs.go:123] Gathering logs for kubelet ...
	I1216 10:34:33.255361  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 10:34:33.334752  848599 logs.go:123] Gathering logs for kube-apiserver [c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804] ...
	I1216 10:34:33.334782  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804"
	I1216 10:34:33.377060  848599 logs.go:123] Gathering logs for kube-proxy [c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc] ...
	I1216 10:34:33.377081  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc"
	I1216 10:34:33.409466  848599 logs.go:123] Gathering logs for kube-controller-manager [bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50] ...
	I1216 10:34:33.409490  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50"
	I1216 10:34:33.463839  848599 logs.go:123] Gathering logs for CRI-O ...
	I1216 10:34:33.463865  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 10:34:33.536966  848599 logs.go:123] Gathering logs for container status ...
	I1216 10:34:33.536995  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 10:34:33.576693  848599 logs.go:123] Gathering logs for dmesg ...
	I1216 10:34:33.576718  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 10:34:33.602210  848599 logs.go:123] Gathering logs for describe nodes ...
	I1216 10:34:33.602235  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 10:34:33.698067  848599 logs.go:123] Gathering logs for etcd [93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6] ...
	I1216 10:34:33.698107  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6"
	I1216 10:34:33.748695  848599 logs.go:123] Gathering logs for coredns [d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190] ...
	I1216 10:34:33.748723  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190"
	I1216 10:34:33.802810  848599 logs.go:123] Gathering logs for kube-scheduler [c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53] ...
	I1216 10:34:33.802846  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53"
	I1216 10:34:33.841795  848599 logs.go:123] Gathering logs for kindnet [9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9] ...
	I1216 10:34:33.841823  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9"
	I1216 10:34:36.373886  848599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 10:34:36.387690  848599 api_server.go:72] duration metric: took 1m32.298817651s to wait for apiserver process to appear ...
	I1216 10:34:36.387721  848599 api_server.go:88] waiting for apiserver healthz status ...
	I1216 10:34:36.387772  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 10:34:36.387841  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 10:34:36.421031  848599 cri.go:89] found id: "c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804"
	I1216 10:34:36.421062  848599 cri.go:89] found id: ""
	I1216 10:34:36.421077  848599 logs.go:282] 1 containers: [c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804]
	I1216 10:34:36.421138  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:36.424373  848599 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 10:34:36.424428  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 10:34:36.456413  848599 cri.go:89] found id: "93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6"
	I1216 10:34:36.456434  848599 cri.go:89] found id: ""
	I1216 10:34:36.456445  848599 logs.go:282] 1 containers: [93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6]
	I1216 10:34:36.456495  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:36.459492  848599 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 10:34:36.459554  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 10:34:36.491350  848599 cri.go:89] found id: "d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190"
	I1216 10:34:36.491370  848599 cri.go:89] found id: ""
	I1216 10:34:36.491379  848599 logs.go:282] 1 containers: [d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190]
	I1216 10:34:36.491420  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:36.494403  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 10:34:36.494454  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 10:34:36.526671  848599 cri.go:89] found id: "c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53"
	I1216 10:34:36.526688  848599 cri.go:89] found id: ""
	I1216 10:34:36.526695  848599 logs.go:282] 1 containers: [c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53]
	I1216 10:34:36.526735  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:36.529636  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 10:34:36.529688  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 10:34:36.563198  848599 cri.go:89] found id: "c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc"
	I1216 10:34:36.563217  848599 cri.go:89] found id: ""
	I1216 10:34:36.563227  848599 logs.go:282] 1 containers: [c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc]
	I1216 10:34:36.563283  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:36.566202  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 10:34:36.566256  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 10:34:36.598334  848599 cri.go:89] found id: "bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50"
	I1216 10:34:36.598353  848599 cri.go:89] found id: ""
	I1216 10:34:36.598361  848599 logs.go:282] 1 containers: [bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50]
	I1216 10:34:36.598413  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:36.601335  848599 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 10:34:36.601404  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 10:34:36.634180  848599 cri.go:89] found id: "9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9"
	I1216 10:34:36.634195  848599 cri.go:89] found id: ""
	I1216 10:34:36.634203  848599 logs.go:282] 1 containers: [9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9]
	I1216 10:34:36.634250  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:36.637167  848599 logs.go:123] Gathering logs for kube-apiserver [c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804] ...
	I1216 10:34:36.637191  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804"
	I1216 10:34:36.680397  848599 logs.go:123] Gathering logs for kube-scheduler [c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53] ...
	I1216 10:34:36.680421  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53"
	I1216 10:34:36.717036  848599 logs.go:123] Gathering logs for kube-controller-manager [bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50] ...
	I1216 10:34:36.717062  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50"
	I1216 10:34:36.771623  848599 logs.go:123] Gathering logs for CRI-O ...
	I1216 10:34:36.771648  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 10:34:36.848400  848599 logs.go:123] Gathering logs for container status ...
	I1216 10:34:36.848426  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 10:34:36.890499  848599 logs.go:123] Gathering logs for describe nodes ...
	I1216 10:34:36.890524  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 10:34:36.984658  848599 logs.go:123] Gathering logs for dmesg ...
	I1216 10:34:36.984683  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 10:34:37.010767  848599 logs.go:123] Gathering logs for etcd [93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6] ...
	I1216 10:34:37.010795  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6"
	I1216 10:34:37.058997  848599 logs.go:123] Gathering logs for coredns [d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190] ...
	I1216 10:34:37.059021  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190"
	I1216 10:34:37.109478  848599 logs.go:123] Gathering logs for kube-proxy [c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc] ...
	I1216 10:34:37.109513  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc"
	I1216 10:34:37.141261  848599 logs.go:123] Gathering logs for kindnet [9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9] ...
	I1216 10:34:37.141284  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9"
	I1216 10:34:37.173779  848599 logs.go:123] Gathering logs for kubelet ...
	I1216 10:34:37.173857  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 10:34:39.755150  848599 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 10:34:39.758789  848599 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1216 10:34:39.759732  848599 api_server.go:141] control plane version: v1.31.2
	I1216 10:34:39.759759  848599 api_server.go:131] duration metric: took 3.372030509s to wait for apiserver health ...
	I1216 10:34:39.759767  848599 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 10:34:39.759796  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 10:34:39.759850  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 10:34:39.795031  848599 cri.go:89] found id: "c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804"
	I1216 10:34:39.795051  848599 cri.go:89] found id: ""
	I1216 10:34:39.795060  848599 logs.go:282] 1 containers: [c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804]
	I1216 10:34:39.795104  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:39.798358  848599 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 10:34:39.798435  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 10:34:39.831890  848599 cri.go:89] found id: "93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6"
	I1216 10:34:39.831906  848599 cri.go:89] found id: ""
	I1216 10:34:39.831913  848599 logs.go:282] 1 containers: [93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6]
	I1216 10:34:39.831951  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:39.834968  848599 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 10:34:39.835037  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 10:34:39.866579  848599 cri.go:89] found id: "d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190"
	I1216 10:34:39.866602  848599 cri.go:89] found id: ""
	I1216 10:34:39.866613  848599 logs.go:282] 1 containers: [d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190]
	I1216 10:34:39.866647  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:39.869695  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 10:34:39.869763  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 10:34:39.901933  848599 cri.go:89] found id: "c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53"
	I1216 10:34:39.901954  848599 cri.go:89] found id: ""
	I1216 10:34:39.901966  848599 logs.go:282] 1 containers: [c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53]
	I1216 10:34:39.902014  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:39.905112  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 10:34:39.905174  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 10:34:39.938572  848599 cri.go:89] found id: "c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc"
	I1216 10:34:39.938590  848599 cri.go:89] found id: ""
	I1216 10:34:39.938598  848599 logs.go:282] 1 containers: [c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc]
	I1216 10:34:39.938648  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:39.941675  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 10:34:39.941738  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 10:34:39.974011  848599 cri.go:89] found id: "bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50"
	I1216 10:34:39.974033  848599 cri.go:89] found id: ""
	I1216 10:34:39.974043  848599 logs.go:282] 1 containers: [bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50]
	I1216 10:34:39.974092  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:39.977679  848599 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 10:34:39.977725  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 10:34:40.011518  848599 cri.go:89] found id: "9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9"
	I1216 10:34:40.011539  848599 cri.go:89] found id: ""
	I1216 10:34:40.011547  848599 logs.go:282] 1 containers: [9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9]
	I1216 10:34:40.011598  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:40.014781  848599 logs.go:123] Gathering logs for kubelet ...
	I1216 10:34:40.014805  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 10:34:40.093024  848599 logs.go:123] Gathering logs for kube-proxy [c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc] ...
	I1216 10:34:40.093048  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc"
	I1216 10:34:40.125022  848599 logs.go:123] Gathering logs for container status ...
	I1216 10:34:40.125045  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 10:34:40.167202  848599 logs.go:123] Gathering logs for etcd [93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6] ...
	I1216 10:34:40.167230  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6"
	I1216 10:34:40.217550  848599 logs.go:123] Gathering logs for coredns [d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190] ...
	I1216 10:34:40.217579  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190"
	I1216 10:34:40.271787  848599 logs.go:123] Gathering logs for kube-scheduler [c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53] ...
	I1216 10:34:40.271829  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53"
	I1216 10:34:40.308809  848599 logs.go:123] Gathering logs for kube-controller-manager [bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50] ...
	I1216 10:34:40.308835  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50"
	I1216 10:34:40.363908  848599 logs.go:123] Gathering logs for kindnet [9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9] ...
	I1216 10:34:40.363934  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9"
	I1216 10:34:40.396438  848599 logs.go:123] Gathering logs for dmesg ...
	I1216 10:34:40.396463  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 10:34:40.424000  848599 logs.go:123] Gathering logs for describe nodes ...
	I1216 10:34:40.424023  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 10:34:40.521844  848599 logs.go:123] Gathering logs for kube-apiserver [c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804] ...
	I1216 10:34:40.521880  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804"
	I1216 10:34:40.565349  848599 logs.go:123] Gathering logs for CRI-O ...
	I1216 10:34:40.565379  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 10:34:43.152582  848599 system_pods.go:59] 19 kube-system pods found
	I1216 10:34:43.152628  848599 system_pods.go:61] "amd-gpu-device-plugin-nhj8x" [483a0808-3e15-4de2-b48a-ecfa43394c55] Running
	I1216 10:34:43.152639  848599 system_pods.go:61] "coredns-7c65d6cfc9-ksv2k" [b31289fc-3ff8-4af0-a5d2-a88dace5589c] Running
	I1216 10:34:43.152645  848599 system_pods.go:61] "csi-hostpath-attacher-0" [9089b466-c717-4755-bf51-2740aecfaeb6] Running
	I1216 10:34:43.152650  848599 system_pods.go:61] "csi-hostpath-resizer-0" [963124d9-8e43-4fb9-a011-05c542d2fb50] Running
	I1216 10:34:43.152655  848599 system_pods.go:61] "csi-hostpathplugin-7826x" [856ef16b-5b68-404c-8df4-558dc73fe76b] Running
	I1216 10:34:43.152660  848599 system_pods.go:61] "etcd-addons-109663" [9789d971-2bea-46bf-872e-e096afce5cb0] Running
	I1216 10:34:43.152666  848599 system_pods.go:61] "kindnet-sn2ww" [1c8f1cfd-5f82-439c-b6f7-b654f855b517] Running
	I1216 10:34:43.152672  848599 system_pods.go:61] "kube-apiserver-addons-109663" [4e04829b-d42e-4de8-be6a-0ec8196b7c28] Running
	I1216 10:34:43.152678  848599 system_pods.go:61] "kube-controller-manager-addons-109663" [c5a39a90-0604-42e4-bdc4-d4b9ab6f6df5] Running
	I1216 10:34:43.152687  848599 system_pods.go:61] "kube-ingress-dns-minikube" [a0ba89f2-e8b1-498e-ab03-dd8a5e50c176] Running
	I1216 10:34:43.152694  848599 system_pods.go:61] "kube-proxy-dw2js" [82afbc0e-6ed6-4a7a-8721-d77176570525] Running
	I1216 10:34:43.152703  848599 system_pods.go:61] "kube-scheduler-addons-109663" [018079f5-5c1a-4a2c-8845-8adfc665ce77] Running
	I1216 10:34:43.152709  848599 system_pods.go:61] "metrics-server-84c5f94fbc-z8rzz" [0c4013ee-0e9e-4bf6-aff8-752bb76b1c0c] Running
	I1216 10:34:43.152719  848599 system_pods.go:61] "nvidia-device-plugin-daemonset-k4znm" [94be2280-9ef7-49a1-aed5-ae48c7b50056] Running
	I1216 10:34:43.152725  848599 system_pods.go:61] "registry-5cc95cd69-rkb22" [9148bfd2-bdfd-42f6-9b6e-f2cb29de4e1e] Running
	I1216 10:34:43.152731  848599 system_pods.go:61] "registry-proxy-w5gg9" [5d79e061-c009-4296-adaf-94ec1a94ed36] Running
	I1216 10:34:43.152737  848599 system_pods.go:61] "snapshot-controller-56fcc65765-8skj8" [29ea6b74-8543-4d6d-a9f0-8476aaef7f19] Running
	I1216 10:34:43.152744  848599 system_pods.go:61] "snapshot-controller-56fcc65765-rb9fx" [62bd9cad-e4a7-474c-9ce0-bb38412ded35] Running
	I1216 10:34:43.152752  848599 system_pods.go:61] "storage-provisioner" [f6eecac1-47ca-4d5e-8014-bbb9f35f7213] Running
	I1216 10:34:43.152764  848599 system_pods.go:74] duration metric: took 3.392988839s to wait for pod list to return data ...
	I1216 10:34:43.152779  848599 default_sa.go:34] waiting for default service account to be created ...
	I1216 10:34:43.154908  848599 default_sa.go:45] found service account: "default"
	I1216 10:34:43.154931  848599 default_sa.go:55] duration metric: took 2.143478ms for default service account to be created ...
	I1216 10:34:43.154942  848599 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 10:34:43.164127  848599 system_pods.go:86] 19 kube-system pods found
	I1216 10:34:43.164152  848599 system_pods.go:89] "amd-gpu-device-plugin-nhj8x" [483a0808-3e15-4de2-b48a-ecfa43394c55] Running
	I1216 10:34:43.164158  848599 system_pods.go:89] "coredns-7c65d6cfc9-ksv2k" [b31289fc-3ff8-4af0-a5d2-a88dace5589c] Running
	I1216 10:34:43.164162  848599 system_pods.go:89] "csi-hostpath-attacher-0" [9089b466-c717-4755-bf51-2740aecfaeb6] Running
	I1216 10:34:43.164166  848599 system_pods.go:89] "csi-hostpath-resizer-0" [963124d9-8e43-4fb9-a011-05c542d2fb50] Running
	I1216 10:34:43.164170  848599 system_pods.go:89] "csi-hostpathplugin-7826x" [856ef16b-5b68-404c-8df4-558dc73fe76b] Running
	I1216 10:34:43.164173  848599 system_pods.go:89] "etcd-addons-109663" [9789d971-2bea-46bf-872e-e096afce5cb0] Running
	I1216 10:34:43.164176  848599 system_pods.go:89] "kindnet-sn2ww" [1c8f1cfd-5f82-439c-b6f7-b654f855b517] Running
	I1216 10:34:43.164180  848599 system_pods.go:89] "kube-apiserver-addons-109663" [4e04829b-d42e-4de8-be6a-0ec8196b7c28] Running
	I1216 10:34:43.164184  848599 system_pods.go:89] "kube-controller-manager-addons-109663" [c5a39a90-0604-42e4-bdc4-d4b9ab6f6df5] Running
	I1216 10:34:43.164189  848599 system_pods.go:89] "kube-ingress-dns-minikube" [a0ba89f2-e8b1-498e-ab03-dd8a5e50c176] Running
	I1216 10:34:43.164195  848599 system_pods.go:89] "kube-proxy-dw2js" [82afbc0e-6ed6-4a7a-8721-d77176570525] Running
	I1216 10:34:43.164199  848599 system_pods.go:89] "kube-scheduler-addons-109663" [018079f5-5c1a-4a2c-8845-8adfc665ce77] Running
	I1216 10:34:43.164203  848599 system_pods.go:89] "metrics-server-84c5f94fbc-z8rzz" [0c4013ee-0e9e-4bf6-aff8-752bb76b1c0c] Running
	I1216 10:34:43.164208  848599 system_pods.go:89] "nvidia-device-plugin-daemonset-k4znm" [94be2280-9ef7-49a1-aed5-ae48c7b50056] Running
	I1216 10:34:43.164220  848599 system_pods.go:89] "registry-5cc95cd69-rkb22" [9148bfd2-bdfd-42f6-9b6e-f2cb29de4e1e] Running
	I1216 10:34:43.164223  848599 system_pods.go:89] "registry-proxy-w5gg9" [5d79e061-c009-4296-adaf-94ec1a94ed36] Running
	I1216 10:34:43.164228  848599 system_pods.go:89] "snapshot-controller-56fcc65765-8skj8" [29ea6b74-8543-4d6d-a9f0-8476aaef7f19] Running
	I1216 10:34:43.164234  848599 system_pods.go:89] "snapshot-controller-56fcc65765-rb9fx" [62bd9cad-e4a7-474c-9ce0-bb38412ded35] Running
	I1216 10:34:43.164237  848599 system_pods.go:89] "storage-provisioner" [f6eecac1-47ca-4d5e-8014-bbb9f35f7213] Running
	I1216 10:34:43.164244  848599 system_pods.go:126] duration metric: took 9.295549ms to wait for k8s-apps to be running ...
	I1216 10:34:43.164253  848599 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 10:34:43.164295  848599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 10:34:43.175918  848599 system_svc.go:56] duration metric: took 11.65853ms WaitForService to wait for kubelet
	I1216 10:34:43.175940  848599 kubeadm.go:582] duration metric: took 1m39.087076667s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 10:34:43.175962  848599 node_conditions.go:102] verifying NodePressure condition ...
	I1216 10:34:43.178532  848599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 10:34:43.178559  848599 node_conditions.go:123] node cpu capacity is 8
	I1216 10:34:43.178575  848599 node_conditions.go:105] duration metric: took 2.605732ms to run NodePressure ...
	I1216 10:34:43.178594  848599 start.go:241] waiting for startup goroutines ...
	I1216 10:34:43.178609  848599 start.go:246] waiting for cluster config update ...
	I1216 10:34:43.178631  848599 start.go:255] writing updated cluster config ...
	I1216 10:34:43.178953  848599 ssh_runner.go:195] Run: rm -f paused
	I1216 10:34:43.230691  848599 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1216 10:34:43.232683  848599 out.go:177] * Done! kubectl is now configured to use "addons-109663" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 16 10:36:58 addons-109663 crio[1041]: time="2024-12-16 10:36:58.273779077Z" level=info msg="Removed pod sandbox: 8bc46f18bedb31da572fc9f1c74f40e93ff832db94b07b6e643b6bacf285ad88" id=7ae85b51-0525-4420-a7e3-a4418fc23787 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 16 10:37:42 addons-109663 crio[1041]: time="2024-12-16 10:37:42.696379738Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-br7qj/POD" id=03f4d38f-128f-4049-8223-559b014b676e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 10:37:42 addons-109663 crio[1041]: time="2024-12-16 10:37:42.696446086Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 16 10:37:42 addons-109663 crio[1041]: time="2024-12-16 10:37:42.714124632Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-br7qj Namespace:default ID:fa2caf81dd3dd6b79e8274a68d3ea74e6c9e0dacfe5922ddb2764dc2c0eb52b7 UID:6046c7ab-0532-4ad2-907c-cbe45f15d836 NetNS:/var/run/netns/0429debe-237e-4f9e-8adb-2edda12918e1 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 16 10:37:42 addons-109663 crio[1041]: time="2024-12-16 10:37:42.714158222Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-br7qj to CNI network \"kindnet\" (type=ptp)"
	Dec 16 10:37:42 addons-109663 crio[1041]: time="2024-12-16 10:37:42.725472994Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-br7qj Namespace:default ID:fa2caf81dd3dd6b79e8274a68d3ea74e6c9e0dacfe5922ddb2764dc2c0eb52b7 UID:6046c7ab-0532-4ad2-907c-cbe45f15d836 NetNS:/var/run/netns/0429debe-237e-4f9e-8adb-2edda12918e1 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 16 10:37:42 addons-109663 crio[1041]: time="2024-12-16 10:37:42.725657315Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-br7qj for CNI network kindnet (type=ptp)"
	Dec 16 10:37:42 addons-109663 crio[1041]: time="2024-12-16 10:37:42.729386244Z" level=info msg="Ran pod sandbox fa2caf81dd3dd6b79e8274a68d3ea74e6c9e0dacfe5922ddb2764dc2c0eb52b7 with infra container: default/hello-world-app-55bf9c44b4-br7qj/POD" id=03f4d38f-128f-4049-8223-559b014b676e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 10:37:42 addons-109663 crio[1041]: time="2024-12-16 10:37:42.730926434Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9aa4f830-863e-48de-975f-8fe81a3b75af name=/runtime.v1.ImageService/ImageStatus
	Dec 16 10:37:42 addons-109663 crio[1041]: time="2024-12-16 10:37:42.731209976Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=9aa4f830-863e-48de-975f-8fe81a3b75af name=/runtime.v1.ImageService/ImageStatus
	Dec 16 10:37:42 addons-109663 crio[1041]: time="2024-12-16 10:37:42.772415488Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=71a5bd68-1af8-40f8-a060-2d05e2acdaaa name=/runtime.v1.ImageService/PullImage
	Dec 16 10:37:42 addons-109663 crio[1041]: time="2024-12-16 10:37:42.776194669Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 16 10:37:42 addons-109663 crio[1041]: time="2024-12-16 10:37:42.921108659Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 16 10:37:43 addons-109663 crio[1041]: time="2024-12-16 10:37:43.322622647Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=71a5bd68-1af8-40f8-a060-2d05e2acdaaa name=/runtime.v1.ImageService/PullImage
	Dec 16 10:37:43 addons-109663 crio[1041]: time="2024-12-16 10:37:43.323238994Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9e5124c2-ea9a-4f5f-9959-5c7614c82eb9 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 10:37:43 addons-109663 crio[1041]: time="2024-12-16 10:37:43.324452595Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9e5124c2-ea9a-4f5f-9959-5c7614c82eb9 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 10:37:43 addons-109663 crio[1041]: time="2024-12-16 10:37:43.326193580Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ad445986-08aa-4adf-ac4c-da8e4ded7b21 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 10:37:43 addons-109663 crio[1041]: time="2024-12-16 10:37:43.327176561Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ad445986-08aa-4adf-ac4c-da8e4ded7b21 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 10:37:43 addons-109663 crio[1041]: time="2024-12-16 10:37:43.328052355Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-br7qj/hello-world-app" id=7733217e-2acf-4800-bf1f-fedba486f42c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 10:37:43 addons-109663 crio[1041]: time="2024-12-16 10:37:43.328158984Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 16 10:37:43 addons-109663 crio[1041]: time="2024-12-16 10:37:43.341364639Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/03ae318c7ff1cec3065d7c2fd68dad8cd3ebec157e59aa878c20ac29e21fcf22/merged/etc/passwd: no such file or directory"
	Dec 16 10:37:43 addons-109663 crio[1041]: time="2024-12-16 10:37:43.341394337Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/03ae318c7ff1cec3065d7c2fd68dad8cd3ebec157e59aa878c20ac29e21fcf22/merged/etc/group: no such file or directory"
	Dec 16 10:37:43 addons-109663 crio[1041]: time="2024-12-16 10:37:43.378072255Z" level=info msg="Created container b6ac96fbd2ff784e0fe00a1f772ffebf5fb4eaac1854f9781ffc6384fb2d4a71: default/hello-world-app-55bf9c44b4-br7qj/hello-world-app" id=7733217e-2acf-4800-bf1f-fedba486f42c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 10:37:43 addons-109663 crio[1041]: time="2024-12-16 10:37:43.378607045Z" level=info msg="Starting container: b6ac96fbd2ff784e0fe00a1f772ffebf5fb4eaac1854f9781ffc6384fb2d4a71" id=8bcdaa37-b955-4f43-852b-712c73fc58a7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 10:37:43 addons-109663 crio[1041]: time="2024-12-16 10:37:43.384254350Z" level=info msg="Started container" PID=11076 containerID=b6ac96fbd2ff784e0fe00a1f772ffebf5fb4eaac1854f9781ffc6384fb2d4a71 description=default/hello-world-app-55bf9c44b4-br7qj/hello-world-app id=8bcdaa37-b955-4f43-852b-712c73fc58a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fa2caf81dd3dd6b79e8274a68d3ea74e6c9e0dacfe5922ddb2764dc2c0eb52b7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	b6ac96fbd2ff7       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   fa2caf81dd3dd       hello-world-app-55bf9c44b4-br7qj
	d384a65188bd6       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago            Running             nginx                     0                   85a1116f8171d       nginx
	e0df8328f54c0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago            Running             busybox                   0                   cab241fcb05db       busybox
	14ccae418ccad       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago            Running             controller                0                   96c18162d52c8       ingress-nginx-controller-5f85ff4588-5q5qg
	58e1003acbde1       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago            Running             minikube-ingress-dns      0                   0b71d4cf0076c       kube-ingress-dns-minikube
	35888134a837c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago            Exited              patch                     0                   282892f631a51       ingress-nginx-admission-patch-s5fq7
	5665ce1082fa5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago            Exited              create                    0                   7588d6c1b3726       ingress-nginx-admission-create-287m6
	d244f32e00679       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago            Running             local-path-provisioner    0                   99edf650f528e       local-path-provisioner-86d989889c-j9wdv
	2497007677f8c       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago            Running             metrics-server            0                   f794499539882       metrics-server-84c5f94fbc-z8rzz
	d395437896ee2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago            Running             coredns                   0                   cb969df31de80       coredns-7c65d6cfc9-ksv2k
	cbfe74880d2d7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   f3272849c7731       storage-provisioner
	9a6bfcbfaf469       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3                           4 minutes ago            Running             kindnet-cni               0                   5a7cc27da2525       kindnet-sn2ww
	c1be7640a86c8       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             4 minutes ago            Running             kube-proxy                0                   9f1718b08cd98       kube-proxy-dw2js
	93aca58b0473f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             4 minutes ago            Running             etcd                      0                   7d663deb48a89       etcd-addons-109663
	c2d7f9e7ddfbc       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             4 minutes ago            Running             kube-apiserver            0                   52fae8dd5fd08       kube-apiserver-addons-109663
	c7d6c76bcfec7       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             4 minutes ago            Running             kube-scheduler            0                   3bd33f6501e1d       kube-scheduler-addons-109663
	bb5423f27c7f2       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             4 minutes ago            Running             kube-controller-manager   0                   3c1599239d305       kube-controller-manager-addons-109663
	
	
	==> coredns [d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190] <==
	[INFO] 10.244.0.4:37612 - 40060 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000093327s
	[INFO] 10.244.0.4:45179 - 13675 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.003755375s
	[INFO] 10.244.0.4:45179 - 14007 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.00420891s
	[INFO] 10.244.0.4:45946 - 63283 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004315672s
	[INFO] 10.244.0.4:45946 - 62934 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004618056s
	[INFO] 10.244.0.4:44563 - 2352 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003613643s
	[INFO] 10.244.0.4:44563 - 2631 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00430746s
	[INFO] 10.244.0.4:48506 - 5195 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00007273s
	[INFO] 10.244.0.4:48506 - 5016 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110423s
	[INFO] 10.244.0.20:46163 - 36954 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000193657s
	[INFO] 10.244.0.20:51642 - 57430 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000254981s
	[INFO] 10.244.0.20:44452 - 1364 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000156169s
	[INFO] 10.244.0.20:35680 - 23064 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000221411s
	[INFO] 10.244.0.20:51507 - 33886 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000135158s
	[INFO] 10.244.0.20:38449 - 8264 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000199952s
	[INFO] 10.244.0.20:36591 - 32512 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.006706949s
	[INFO] 10.244.0.20:43494 - 28794 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.006820448s
	[INFO] 10.244.0.20:47839 - 33089 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004590375s
	[INFO] 10.244.0.20:55512 - 37006 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005359418s
	[INFO] 10.244.0.20:44275 - 58247 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004085356s
	[INFO] 10.244.0.20:40163 - 60073 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005091282s
	[INFO] 10.244.0.20:48232 - 52746 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000636345s
	[INFO] 10.244.0.20:50476 - 59323 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 420 0.000797507s
	[INFO] 10.244.0.27:40836 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000231287s
	[INFO] 10.244.0.27:50723 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000201791s
	
	
	==> describe nodes <==
	Name:               addons-109663
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-109663
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8
	                    minikube.k8s.io/name=addons-109663
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T10_32_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-109663
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 10:32:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-109663
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 10:37:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 10:36:01 +0000   Mon, 16 Dec 2024 10:32:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 10:36:01 +0000   Mon, 16 Dec 2024 10:32:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 10:36:01 +0000   Mon, 16 Dec 2024 10:32:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 10:36:01 +0000   Mon, 16 Dec 2024 10:33:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-109663
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 c878448df26f4703bfd4f4644cd4f6ef
	  System UUID:                1d94d62c-1455-428d-baf9-9d8a353f13c2
	  Boot ID:                    9fd10bb4-c61e-4d88-b4b5-bae725bc9632
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  default                     hello-world-app-55bf9c44b4-br7qj             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-5q5qg    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m33s
	  kube-system                 coredns-7c65d6cfc9-ksv2k                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m40s
	  kube-system                 etcd-addons-109663                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m45s
	  kube-system                 kindnet-sn2ww                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m40s
	  kube-system                 kube-apiserver-addons-109663                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-controller-manager-addons-109663        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-dw2js                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-scheduler-addons-109663                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 metrics-server-84c5f94fbc-z8rzz              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  local-path-storage          local-path-provisioner-86d989889c-j9wdv      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m38s  kube-proxy       
	  Normal   Starting                 4m46s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m46s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  4m45s  kubelet          Node addons-109663 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m45s  kubelet          Node addons-109663 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m45s  kubelet          Node addons-109663 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m41s  node-controller  Node addons-109663 event: Registered Node addons-109663 in Controller
	  Normal   NodeReady                4m21s  kubelet          Node addons-109663 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de be ee 00 db 5d 08 06
	[  +0.004678] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 d9 73 09 a8 1d 08 06
	[  +8.602351] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 ec 78 2d 3c ff 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 82 7d e3 e9 86 08 06
	[Dec16 09:19] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 d4 08 d7 58 df 08 06
	[  +0.000407] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 82 d9 73 09 a8 1d 08 06
	[Dec16 10:35] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: de d7 2f 49 bb 5f fa 9e 56 e2 a0 0e 08 00
	[  +1.023752] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de d7 2f 49 bb 5f fa 9e 56 e2 a0 0e 08 00
	[  +2.015839] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: de d7 2f 49 bb 5f fa 9e 56 e2 a0 0e 08 00
	[  +4.095632] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de d7 2f 49 bb 5f fa 9e 56 e2 a0 0e 08 00
	[  +8.195350] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de d7 2f 49 bb 5f fa 9e 56 e2 a0 0e 08 00
	[Dec16 10:36] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de d7 2f 49 bb 5f fa 9e 56 e2 a0 0e 08 00
	[ +33.277339] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000012] ll header: 00000000: de d7 2f 49 bb 5f fa 9e 56 e2 a0 0e 08 00
	
	
	==> etcd [93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6] <==
	{"level":"info","ts":"2024-12-16T10:33:07.672486Z","caller":"traceutil/trace.go:171","msg":"trace[793751114] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:377; }","duration":"295.672195ms","start":"2024-12-16T10:33:07.376804Z","end":"2024-12-16T10:33:07.672476Z","steps":["trace[793751114] 'agreement among raft nodes before linearized reading'  (duration: 216.741433ms)","trace[793751114] 'range keys from in-memory index tree'  (duration: 78.883366ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-16T10:33:07.672864Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.571774ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2024-12-16T10:33:07.672892Z","caller":"traceutil/trace.go:171","msg":"trace[1769695805] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:378; }","duration":"186.604327ms","start":"2024-12-16T10:33:07.486278Z","end":"2024-12-16T10:33:07.672883Z","steps":["trace[1769695805] 'agreement among raft nodes before linearized reading'  (duration: 186.519799ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:07.994499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.96805ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128033944884734075 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-d4bw4\" mod_revision:384 > success:<request_delete_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-d4bw4\" > > failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-d4bw4\" > >>","response":"size:18"}
	{"level":"info","ts":"2024-12-16T10:33:08.071909Z","caller":"traceutil/trace.go:171","msg":"trace[99038856] transaction","detail":"{read_only:false; number_of_response:1; response_revision:392; }","duration":"191.844388ms","start":"2024-12-16T10:33:07.880048Z","end":"2024-12-16T10:33:08.071892Z","steps":["trace[99038856] 'compare'  (duration: 112.892348ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T10:33:08.072065Z","caller":"traceutil/trace.go:171","msg":"trace[1239885436] linearizableReadLoop","detail":"{readStateIndex:407; appliedIndex:406; }","duration":"191.650397ms","start":"2024-12-16T10:33:07.880403Z","end":"2024-12-16T10:33:08.072053Z","steps":["trace[1239885436] 'read index received'  (duration: 833.119µs)","trace[1239885436] 'applied index is now lower than readState.Index'  (duration: 190.816182ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T10:33:08.072223Z","caller":"traceutil/trace.go:171","msg":"trace[738296694] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"191.759501ms","start":"2024-12-16T10:33:07.880456Z","end":"2024-12-16T10:33:08.072215Z","steps":["trace[738296694] 'process raft request'  (duration: 114.121491ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T10:33:08.072325Z","caller":"traceutil/trace.go:171","msg":"trace[817934492] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"191.792502ms","start":"2024-12-16T10:33:07.880522Z","end":"2024-12-16T10:33:08.072315Z","steps":["trace[817934492] 'process raft request'  (duration: 114.113915ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T10:33:08.072437Z","caller":"traceutil/trace.go:171","msg":"trace[960474592] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"191.735918ms","start":"2024-12-16T10:33:07.880693Z","end":"2024-12-16T10:33:08.072429Z","steps":["trace[960474592] 'process raft request'  (duration: 113.973168ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:08.072677Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.261081ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T10:33:08.072709Z","caller":"traceutil/trace.go:171","msg":"trace[1404446973] range","detail":"{range_begin:/registry/services/specs/kube-system/registry; range_end:; response_count:0; response_revision:401; }","duration":"192.300009ms","start":"2024-12-16T10:33:07.880400Z","end":"2024-12-16T10:33:08.072700Z","steps":["trace[1404446973] 'agreement among raft nodes before linearized reading'  (duration: 192.239715ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:08.072859Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.039823ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:260"}
	{"level":"info","ts":"2024-12-16T10:33:08.072886Z","caller":"traceutil/trace.go:171","msg":"trace[381039783] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:401; }","duration":"192.072803ms","start":"2024-12-16T10:33:07.880806Z","end":"2024-12-16T10:33:08.072879Z","steps":["trace[381039783] 'agreement among raft nodes before linearized reading'  (duration: 192.013363ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:08.381071Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.234713ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T10:33:08.381168Z","caller":"traceutil/trace.go:171","msg":"trace[1952366159] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:0; response_revision:425; }","duration":"100.32411ms","start":"2024-12-16T10:33:08.280818Z","end":"2024-12-16T10:33:08.381142Z","steps":["trace[1952366159] 'agreement among raft nodes before linearized reading'  (duration: 100.095009ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:08.485917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.755156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" ","response":"range_response_count:1 size:3350"}
	{"level":"info","ts":"2024-12-16T10:33:08.486052Z","caller":"traceutil/trace.go:171","msg":"trace[816096498] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:1; response_revision:426; }","duration":"199.895603ms","start":"2024-12-16T10:33:08.286139Z","end":"2024-12-16T10:33:08.486035Z","steps":["trace[816096498] 'agreement among raft nodes before linearized reading'  (duration: 95.906549ms)","trace[816096498] 'range keys from in-memory index tree'  (duration: 92.502138ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T10:33:08.486128Z","caller":"traceutil/trace.go:171","msg":"trace[641752804] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"103.967584ms","start":"2024-12-16T10:33:08.382149Z","end":"2024-12-16T10:33:08.486117Z","steps":["trace[641752804] 'process raft request'  (duration: 91.625966ms)","trace[641752804] 'compare'  (duration: 11.954162ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T10:34:05.693646Z","caller":"traceutil/trace.go:171","msg":"trace[1539685743] transaction","detail":"{read_only:false; response_revision:1111; number_of_response:1; }","duration":"105.470497ms","start":"2024-12-16T10:34:05.588152Z","end":"2024-12-16T10:34:05.693622Z","steps":["trace[1539685743] 'process raft request'  (duration: 88.430195ms)","trace[1539685743] 'compare'  (duration: 16.935916ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T10:34:28.616542Z","caller":"traceutil/trace.go:171","msg":"trace[1208903567] linearizableReadLoop","detail":"{readStateIndex:1235; appliedIndex:1234; }","duration":"116.984843ms","start":"2024-12-16T10:34:28.499541Z","end":"2024-12-16T10:34:28.616526Z","steps":["trace[1208903567] 'read index received'  (duration: 54.381885ms)","trace[1208903567] 'applied index is now lower than readState.Index'  (duration: 62.602481ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T10:34:28.616682Z","caller":"traceutil/trace.go:171","msg":"trace[608077655] transaction","detail":"{read_only:false; response_revision:1197; number_of_response:1; }","duration":"197.753384ms","start":"2024-12-16T10:34:28.418905Z","end":"2024-12-16T10:34:28.616659Z","steps":["trace[608077655] 'process raft request'  (duration: 135.083294ms)","trace[608077655] 'compare'  (duration: 62.448167ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-16T10:34:28.616727Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.057461ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"warn","ts":"2024-12-16T10:34:28.616733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.167393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-z8rzz\" ","response":"range_response_count:1 size:4862"}
	{"level":"info","ts":"2024-12-16T10:34:28.616768Z","caller":"traceutil/trace.go:171","msg":"trace[312161016] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1197; }","duration":"117.108854ms","start":"2024-12-16T10:34:28.499643Z","end":"2024-12-16T10:34:28.616752Z","steps":["trace[312161016] 'agreement among raft nodes before linearized reading'  (duration: 116.977084ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T10:34:28.616774Z","caller":"traceutil/trace.go:171","msg":"trace[1404395514] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-z8rzz; range_end:; response_count:1; response_revision:1197; }","duration":"117.232126ms","start":"2024-12-16T10:34:28.499532Z","end":"2024-12-16T10:34:28.616764Z","steps":["trace[1404395514] 'agreement among raft nodes before linearized reading'  (duration: 117.089497ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:37:43 up  3:20,  0 users,  load average: 0.14, 30.99, 85.30
	Linux addons-109663 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9] <==
	I1216 10:35:41.673431       1 main.go:301] handling current node
	I1216 10:35:51.672575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:35:51.672625       1 main.go:301] handling current node
	I1216 10:36:01.672946       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:36:01.672982       1 main.go:301] handling current node
	I1216 10:36:11.673658       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:36:11.673724       1 main.go:301] handling current node
	I1216 10:36:21.679547       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:36:21.679588       1 main.go:301] handling current node
	I1216 10:36:31.672627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:36:31.672666       1 main.go:301] handling current node
	I1216 10:36:41.679607       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:36:41.679642       1 main.go:301] handling current node
	I1216 10:36:51.681302       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:36:51.681337       1 main.go:301] handling current node
	I1216 10:37:01.679573       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:37:01.679605       1 main.go:301] handling current node
	I1216 10:37:11.673589       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:37:11.673624       1 main.go:301] handling current node
	I1216 10:37:21.681414       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:37:21.681454       1 main.go:301] handling current node
	I1216 10:37:31.681501       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:37:31.681540       1 main.go:301] handling current node
	I1216 10:37:41.681577       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:37:41.681608       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804] <==
	E1216 10:34:32.691074       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.203.234:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.203.234:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.203.234:443: connect: connection refused" logger="UnhandledError"
	E1216 10:34:32.692656       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.203.234:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.203.234:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.203.234:443: connect: connection refused" logger="UnhandledError"
	I1216 10:34:32.723275       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1216 10:34:51.896672       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38980: use of closed network connection
	E1216 10:34:52.055857       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39006: use of closed network connection
	I1216 10:35:01.015105       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.13.210"}
	I1216 10:35:21.588891       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1216 10:35:21.753463       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.56.95"}
	I1216 10:35:23.395373       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1216 10:35:24.473039       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1216 10:35:48.692045       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1216 10:36:01.744832       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:36:01.744890       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 10:36:01.758462       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:36:01.758509       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 10:36:01.758956       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:36:01.759012       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 10:36:01.772864       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:36:01.773001       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 10:36:01.783253       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:36:01.783298       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1216 10:36:02.759800       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1216 10:36:02.784525       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1216 10:36:02.880411       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1216 10:37:42.590874       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.178.29"}
	
	
	==> kube-controller-manager [bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50] <==
	E1216 10:36:17.730622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:36:22.386542       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:36:22.386593       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:36:22.494051       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:36:22.494088       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:36:32.538411       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:36:32.538455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:36:41.463757       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:36:41.463802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:36:42.562182       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:36:42.562224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:36:42.664122       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:36:42.664158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:37:08.405172       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:37:08.405218       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:37:09.740745       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:37:09.740787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:37:10.617185       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:37:10.617231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:37:28.877574       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:37:28.877637       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1216 10:37:42.393681       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.088603ms"
	I1216 10:37:42.398659       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="4.925663ms"
	I1216 10:37:42.398743       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="41.342µs"
	I1216 10:37:42.404194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="83.43µs"
	
	
	==> kube-proxy [c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc] <==
	I1216 10:33:04.183324       1 server_linux.go:66] "Using iptables proxy"
	I1216 10:33:04.591563       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1216 10:33:04.678101       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 10:33:05.489706       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 10:33:05.489861       1 server_linux.go:169] "Using iptables Proxier"
	I1216 10:33:05.694015       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 10:33:05.694429       1 server.go:483] "Version info" version="v1.31.2"
	I1216 10:33:05.694453       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 10:33:05.788634       1 config.go:199] "Starting service config controller"
	I1216 10:33:06.171703       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 10:33:06.171734       1 shared_informer.go:320] Caches are synced for service config
	I1216 10:33:05.790467       1 config.go:105] "Starting endpoint slice config controller"
	I1216 10:33:06.171782       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 10:33:06.171788       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 10:33:05.790422       1 config.go:328] "Starting node config controller"
	I1216 10:33:06.171865       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 10:33:06.171872       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53] <==
	W1216 10:32:55.981327       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 10:32:55.981346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1216 10:32:55.981349       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E1216 10:32:55.981313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:55.981466       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1216 10:32:55.981508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1216 10:32:55.981539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:55.981556       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1216 10:32:55.981551       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1216 10:32:55.981651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:55.981595       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1216 10:32:55.981694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:55.981595       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1216 10:32:55.981721       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:55.981602       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 10:32:55.981745       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:56.856381       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1216 10:32:56.856434       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:56.883003       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 10:32:56.883044       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:56.921571       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 10:32:56.921608       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:56.963959       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 10:32:56.964001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1216 10:32:57.377547       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 10:37:38 addons-109663 kubelet[1650]: E1216 10:37:38.231358    1650 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345458231192433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617956,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:37:38 addons-109663 kubelet[1650]: E1216 10:37:38.231392    1650 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345458231192433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617956,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: E1216 10:37:42.394676    1650 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="856ef16b-5b68-404c-8df4-558dc73fe76b" containerName="node-driver-registrar"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: E1216 10:37:42.394725    1650 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="856ef16b-5b68-404c-8df4-558dc73fe76b" containerName="csi-snapshotter"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: E1216 10:37:42.394736    1650 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="963124d9-8e43-4fb9-a011-05c542d2fb50" containerName="csi-resizer"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: E1216 10:37:42.394744    1650 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="856ef16b-5b68-404c-8df4-558dc73fe76b" containerName="csi-provisioner"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: E1216 10:37:42.394754    1650 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="62bd9cad-e4a7-474c-9ce0-bb38412ded35" containerName="volume-snapshot-controller"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: E1216 10:37:42.394762    1650 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="856ef16b-5b68-404c-8df4-558dc73fe76b" containerName="hostpath"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: E1216 10:37:42.394772    1650 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="856ef16b-5b68-404c-8df4-558dc73fe76b" containerName="liveness-probe"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: E1216 10:37:42.394782    1650 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9089b466-c717-4755-bf51-2740aecfaeb6" containerName="csi-attacher"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: E1216 10:37:42.394790    1650 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="07292c74-48a5-4558-9412-61806490f959" containerName="task-pv-container"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: E1216 10:37:42.394799    1650 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="29ea6b74-8543-4d6d-a9f0-8476aaef7f19" containerName="volume-snapshot-controller"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: E1216 10:37:42.394807    1650 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="856ef16b-5b68-404c-8df4-558dc73fe76b" containerName="csi-external-health-monitor-controller"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: I1216 10:37:42.394878    1650 memory_manager.go:354] "RemoveStaleState removing state" podUID="856ef16b-5b68-404c-8df4-558dc73fe76b" containerName="csi-external-health-monitor-controller"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: I1216 10:37:42.394890    1650 memory_manager.go:354] "RemoveStaleState removing state" podUID="856ef16b-5b68-404c-8df4-558dc73fe76b" containerName="hostpath"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: I1216 10:37:42.394900    1650 memory_manager.go:354] "RemoveStaleState removing state" podUID="856ef16b-5b68-404c-8df4-558dc73fe76b" containerName="csi-provisioner"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: I1216 10:37:42.394909    1650 memory_manager.go:354] "RemoveStaleState removing state" podUID="963124d9-8e43-4fb9-a011-05c542d2fb50" containerName="csi-resizer"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: I1216 10:37:42.394916    1650 memory_manager.go:354] "RemoveStaleState removing state" podUID="07292c74-48a5-4558-9412-61806490f959" containerName="task-pv-container"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: I1216 10:37:42.394927    1650 memory_manager.go:354] "RemoveStaleState removing state" podUID="9089b466-c717-4755-bf51-2740aecfaeb6" containerName="csi-attacher"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: I1216 10:37:42.394935    1650 memory_manager.go:354] "RemoveStaleState removing state" podUID="856ef16b-5b68-404c-8df4-558dc73fe76b" containerName="csi-snapshotter"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: I1216 10:37:42.394943    1650 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ea6b74-8543-4d6d-a9f0-8476aaef7f19" containerName="volume-snapshot-controller"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: I1216 10:37:42.394951    1650 memory_manager.go:354] "RemoveStaleState removing state" podUID="62bd9cad-e4a7-474c-9ce0-bb38412ded35" containerName="volume-snapshot-controller"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: I1216 10:37:42.394958    1650 memory_manager.go:354] "RemoveStaleState removing state" podUID="856ef16b-5b68-404c-8df4-558dc73fe76b" containerName="node-driver-registrar"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: I1216 10:37:42.394967    1650 memory_manager.go:354] "RemoveStaleState removing state" podUID="856ef16b-5b68-404c-8df4-558dc73fe76b" containerName="liveness-probe"
	Dec 16 10:37:42 addons-109663 kubelet[1650]: I1216 10:37:42.571791    1650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8df6\" (UniqueName: \"kubernetes.io/projected/6046c7ab-0532-4ad2-907c-cbe45f15d836-kube-api-access-h8df6\") pod \"hello-world-app-55bf9c44b4-br7qj\" (UID: \"6046c7ab-0532-4ad2-907c-cbe45f15d836\") " pod="default/hello-world-app-55bf9c44b4-br7qj"
	
	
	==> storage-provisioner [cbfe74880d2d74600d5e828c17a093b09e9242e83f220b8981aab484b98eba00] <==
	I1216 10:33:23.101386       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 10:33:23.109387       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 10:33:23.109441       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 10:33:23.119725       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 10:33:23.119895       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-109663_ffc62bba-699e-4bb1-b733-f38ab028cbbd!
	I1216 10:33:23.120209       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"172750c2-26af-46b6-a829-2003eae424b5", APIVersion:"v1", ResourceVersion:"892", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-109663_ffc62bba-699e-4bb1-b733-f38ab028cbbd became leader
	I1216 10:33:23.272006       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-109663_ffc62bba-699e-4bb1-b733-f38ab028cbbd!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-109663 -n addons-109663
helpers_test.go:261: (dbg) Run:  kubectl --context addons-109663 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-287m6 ingress-nginx-admission-patch-s5fq7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-109663 describe pod ingress-nginx-admission-create-287m6 ingress-nginx-admission-patch-s5fq7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-109663 describe pod ingress-nginx-admission-create-287m6 ingress-nginx-admission-patch-s5fq7: exit status 1 (54.138643ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-287m6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-s5fq7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-109663 describe pod ingress-nginx-admission-create-287m6 ingress-nginx-admission-patch-s5fq7: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-109663 addons disable ingress-dns --alsologtostderr -v=1: (1.496744236s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-109663 addons disable ingress --alsologtostderr -v=1: (7.675734576s)
--- FAIL: TestAddons/parallel/Ingress (152.57s)
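Editor's note: the decisive step above is the in-node probe `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`, which ended with exit status 28, curl's "operation timed out" error, so the ingress controller never answered within the 2m10s window. A minimal manual re-run of the same probe, using only names that appear in this log (profile addons-109663, host rule nginx.example.com) and an arbitrary illustrative --max-time, would be a sketch along these lines:

	# Re-issue the probe the test performs via SSH into the minikube node
	out/minikube-linux-amd64 -p addons-109663 ssh \
	  "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"

	# Inspect the ingress controller and the Ingress objects it should be serving
	kubectl --context addons-109663 -n ingress-nginx get pods,svc
	kubectl --context addons-109663 get ingress -A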

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (359.69s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.086436ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-z8rzz" [0c4013ee-0e9e-4bf6-aff8-752bb76b1c0c] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002521064s
addons_test.go:402: (dbg) Run:  kubectl --context addons-109663 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-109663 top pods -n kube-system: exit status 1 (59.281008ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ksv2k, age: 2m2.410420686s

                                                
                                                
** /stderr **
I1216 10:35:05.413079  847292 retry.go:31] will retry after 1.975227244s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-109663 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-109663 top pods -n kube-system: exit status 1 (60.053637ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ksv2k, age: 2m4.446221881s

                                                
                                                
** /stderr **
I1216 10:35:07.449130  847292 retry.go:31] will retry after 3.078195551s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-109663 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-109663 top pods -n kube-system: exit status 1 (56.456544ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ksv2k, age: 2m7.581966345s

                                                
                                                
** /stderr **
I1216 10:35:10.584479  847292 retry.go:31] will retry after 8.208311653s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-109663 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-109663 top pods -n kube-system: exit status 1 (58.911763ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ksv2k, age: 2m15.849589208s

                                                
                                                
** /stderr **
I1216 10:35:18.852215  847292 retry.go:31] will retry after 11.555132556s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-109663 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-109663 top pods -n kube-system: exit status 1 (57.011226ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ksv2k, age: 2m27.462201556s

                                                
                                                
** /stderr **
I1216 10:35:30.464739  847292 retry.go:31] will retry after 10.926668976s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-109663 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-109663 top pods -n kube-system: exit status 1 (73.648139ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ksv2k, age: 2m38.463589226s

                                                
                                                
** /stderr **
I1216 10:35:41.466304  847292 retry.go:31] will retry after 23.611692898s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-109663 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-109663 top pods -n kube-system: exit status 1 (58.037486ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ksv2k, age: 3m2.13441262s

                                                
                                                
** /stderr **
I1216 10:36:05.137073  847292 retry.go:31] will retry after 41.041179734s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-109663 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-109663 top pods -n kube-system: exit status 1 (61.418574ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ksv2k, age: 3m43.238186465s

                                                
                                                
** /stderr **
I1216 10:36:46.240694  847292 retry.go:31] will retry after 54.568523623s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-109663 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-109663 top pods -n kube-system: exit status 1 (57.101194ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ksv2k, age: 4m37.868004032s

                                                
                                                
** /stderr **
I1216 10:37:40.870819  847292 retry.go:31] will retry after 1m10.080787552s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-109663 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-109663 top pods -n kube-system: exit status 1 (58.589421ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ksv2k, age: 5m48.009765679s

                                                
                                                
** /stderr **
I1216 10:38:51.012499  847292 retry.go:31] will retry after 1m9.073465657s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-109663 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-109663 top pods -n kube-system: exit status 1 (58.677369ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ksv2k, age: 6m57.144921088s

                                                
                                                
** /stderr **
I1216 10:40:00.147509  847292 retry.go:31] will retry after 57.531798256s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-109663 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-109663 top pods -n kube-system: exit status 1 (58.76393ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-ksv2k, age: 7m54.737290619s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
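Editor's note: every `kubectl top pods` retry above fails with "Metrics not available", i.e. the aggregated Metrics API never started serving during the 6-minute window, which is consistent with the earlier apiserver errors dialing https://10.104.203.234:443 for v1beta1.metrics.k8s.io. A few hypothetical follow-up checks, using only names visible in this log, that would narrow this down before reading the post-mortem output below:

	# State of the aggregated API registration for metrics.k8s.io
	kubectl --context addons-109663 get apiservice v1beta1.metrics.k8s.io

	# Query the Metrics API directly; an error here points at the aggregation layer
	kubectl --context addons-109663 get --raw "/apis/metrics.k8s.io/v1beta1/nodes"

	# Recent metrics-server logs (deployment name assumed from pod metrics-server-84c5f94fbc-z8rzz)
	kubectl --context addons-109663 -n kube-system logs deploy/metrics-server --tail=50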
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-109663
helpers_test.go:235: (dbg) docker inspect addons-109663:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1a5d30b35ebd94d45b4b20f053e9d801bffac6feb46db54b983452fbca50984b",
	        "Created": "2024-12-16T10:32:42.208735109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 849348,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-16T10:32:42.321849535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7036ee4d70b7e266f67949e27a52ed21246dbdde9902b1d29235748548d311cb",
	        "ResolvConfPath": "/var/lib/docker/containers/1a5d30b35ebd94d45b4b20f053e9d801bffac6feb46db54b983452fbca50984b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1a5d30b35ebd94d45b4b20f053e9d801bffac6feb46db54b983452fbca50984b/hostname",
	        "HostsPath": "/var/lib/docker/containers/1a5d30b35ebd94d45b4b20f053e9d801bffac6feb46db54b983452fbca50984b/hosts",
	        "LogPath": "/var/lib/docker/containers/1a5d30b35ebd94d45b4b20f053e9d801bffac6feb46db54b983452fbca50984b/1a5d30b35ebd94d45b4b20f053e9d801bffac6feb46db54b983452fbca50984b-json.log",
	        "Name": "/addons-109663",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-109663:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-109663",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/65804e8ecf53a4a783bcbd11ff1ee57774d652a79d14faa51abcf74021f9f0a6-init/diff:/var/lib/docker/overlay2/123e2f1df366b4ca43a26782c77043f0e4cd5c6388fa90b6b3300da767616189/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65804e8ecf53a4a783bcbd11ff1ee57774d652a79d14faa51abcf74021f9f0a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65804e8ecf53a4a783bcbd11ff1ee57774d652a79d14faa51abcf74021f9f0a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65804e8ecf53a4a783bcbd11ff1ee57774d652a79d14faa51abcf74021f9f0a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-109663",
	                "Source": "/var/lib/docker/volumes/addons-109663/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-109663",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-109663",
	                "name.minikube.sigs.k8s.io": "addons-109663",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6c0a3e7ab167392b4b1457a91806bd78d3a67f0fd8e01a37251db9ff03c74d5d",
	            "SandboxKey": "/var/run/docker/netns/6c0a3e7ab167",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-109663": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8d8d19425ae9a0d7e09aa1deae754ccc44dc321a7589581cd2cc49ee9d8127e2",
	                    "EndpointID": "1fd1a8fade4259280c934e3bd3078705e00cc2e63230df4f97442f57a51b046a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-109663",
	                        "1a5d30b35ebd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
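Note on the inspect output above: the published host ports recorded under "NetworkSettings.Ports" (22/tcp on 127.0.0.1:33139, 8443/tcp on 127.0.0.1:33142) are the values the harness later reads back with a Go template before opening SSH sessions to the node (see the "docker container inspect -f" calls in the start log below). As a minimal illustration using only names and values already captured in this report, the same lookup can be reproduced against this container:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-109663
	# prints 33139 for the container inspected above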
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-109663 -n addons-109663
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-109663 logs -n 25: (1.032574419s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-072674                                                                   | download-docker-072674 | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC | 16 Dec 24 10:32 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-516574   | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC |                     |
	|         | binary-mirror-516574                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:32893                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-516574                                                                     | binary-mirror-516574   | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC | 16 Dec 24 10:32 UTC |
	| addons  | enable dashboard -p                                                                         | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC |                     |
	|         | addons-109663                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC |                     |
	|         | addons-109663                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-109663 --wait=true                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC | 16 Dec 24 10:34 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-109663 addons disable                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-109663 addons disable                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:35 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | -p addons-109663                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-109663 addons                                                                        | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-109663 addons disable                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-109663 ssh cat                                                                       | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | /opt/local-path-provisioner/pvc-9e504c9a-bb3a-4229-9525-d31715212760_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-109663 addons disable                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-109663 ip                                                                            | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	| addons  | addons-109663 addons disable                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-109663 addons                                                                        | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-109663 addons disable                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-109663 addons                                                                        | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-109663 ssh curl -s                                                                   | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-109663 addons disable                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-109663 addons                                                                        | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:36 UTC | 16 Dec 24 10:36 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-109663 addons                                                                        | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:36 UTC | 16 Dec 24 10:36 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-109663 ip                                                                            | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:37 UTC | 16 Dec 24 10:37 UTC |
	| addons  | addons-109663 addons disable                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:37 UTC | 16 Dec 24 10:37 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-109663 addons disable                                                                | addons-109663          | jenkins | v1.34.0 | 16 Dec 24 10:37 UTC | 16 Dec 24 10:37 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 10:32:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 10:32:20.176960  848599 out.go:345] Setting OutFile to fd 1 ...
	I1216 10:32:20.177056  848599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:32:20.177064  848599 out.go:358] Setting ErrFile to fd 2...
	I1216 10:32:20.177068  848599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:32:20.177239  848599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-840384/.minikube/bin
	I1216 10:32:20.177825  848599 out.go:352] Setting JSON to false
	I1216 10:32:20.178694  848599 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11687,"bootTime":1734333453,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 10:32:20.178790  848599 start.go:139] virtualization: kvm guest
	I1216 10:32:20.180687  848599 out.go:177] * [addons-109663] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 10:32:20.182103  848599 notify.go:220] Checking for updates...
	I1216 10:32:20.182122  848599 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 10:32:20.183273  848599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 10:32:20.184504  848599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-840384/kubeconfig
	I1216 10:32:20.185694  848599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-840384/.minikube
	I1216 10:32:20.186976  848599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 10:32:20.188067  848599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 10:32:20.189305  848599 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 10:32:20.210238  848599 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 10:32:20.210385  848599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 10:32:20.255369  848599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-16 10:32:20.24671902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 10:32:20.255520  848599 docker.go:318] overlay module found
	I1216 10:32:20.257178  848599 out.go:177] * Using the docker driver based on user configuration
	I1216 10:32:20.258429  848599 start.go:297] selected driver: docker
	I1216 10:32:20.258449  848599 start.go:901] validating driver "docker" against <nil>
	I1216 10:32:20.258461  848599 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 10:32:20.259277  848599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 10:32:20.303533  848599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-16 10:32:20.295513369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 10:32:20.303701  848599 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 10:32:20.303936  848599 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 10:32:20.305297  848599 out.go:177] * Using Docker driver with root privileges
	I1216 10:32:20.306405  848599 cni.go:84] Creating CNI manager for ""
	I1216 10:32:20.306461  848599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 10:32:20.306471  848599 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 10:32:20.306562  848599 start.go:340] cluster config:
	{Name:addons-109663 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-109663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 10:32:20.307714  848599 out.go:177] * Starting "addons-109663" primary control-plane node in "addons-109663" cluster
	I1216 10:32:20.308731  848599 cache.go:121] Beginning downloading kic base image for docker with crio
	I1216 10:32:20.309955  848599 out.go:177] * Pulling base image v0.0.45-1733912881-20083 ...
	I1216 10:32:20.311129  848599 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 10:32:20.311157  848599 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-840384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1216 10:32:20.311163  848599 cache.go:56] Caching tarball of preloaded images
	I1216 10:32:20.311160  848599 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local docker daemon
	I1216 10:32:20.311232  848599 preload.go:172] Found /home/jenkins/minikube-integration/20107-840384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 10:32:20.311243  848599 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1216 10:32:20.311587  848599 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/config.json ...
	I1216 10:32:20.311614  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/config.json: {Name:mkeda270ee12e3e9c2b3f96211254f0d67bf6da1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:20.325703  848599 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 to local cache
	I1216 10:32:20.325805  848599 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local cache directory
	I1216 10:32:20.325824  848599 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local cache directory, skipping pull
	I1216 10:32:20.325831  848599 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 exists in cache, skipping pull
	I1216 10:32:20.325842  848599 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 as a tarball
	I1216 10:32:20.325853  848599 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 from local cache
	I1216 10:32:32.270421  848599 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 from cached tarball
	I1216 10:32:32.270469  848599 cache.go:194] Successfully downloaded all kic artifacts
	I1216 10:32:32.270526  848599 start.go:360] acquireMachinesLock for addons-109663: {Name:mk322ac902230420e2cfa3c4d031bb3cb0c61bc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 10:32:32.270650  848599 start.go:364] duration metric: took 96.592µs to acquireMachinesLock for "addons-109663"
	I1216 10:32:32.270692  848599 start.go:93] Provisioning new machine with config: &{Name:addons-109663 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-109663 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 10:32:32.270785  848599 start.go:125] createHost starting for "" (driver="docker")
	I1216 10:32:32.272482  848599 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1216 10:32:32.272745  848599 start.go:159] libmachine.API.Create for "addons-109663" (driver="docker")
	I1216 10:32:32.272789  848599 client.go:168] LocalClient.Create starting
	I1216 10:32:32.272894  848599 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca.pem
	I1216 10:32:32.524572  848599 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/cert.pem
	I1216 10:32:32.623176  848599 cli_runner.go:164] Run: docker network inspect addons-109663 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 10:32:32.639140  848599 cli_runner.go:211] docker network inspect addons-109663 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 10:32:32.639223  848599 network_create.go:284] running [docker network inspect addons-109663] to gather additional debugging logs...
	I1216 10:32:32.639249  848599 cli_runner.go:164] Run: docker network inspect addons-109663
	W1216 10:32:32.654825  848599 cli_runner.go:211] docker network inspect addons-109663 returned with exit code 1
	I1216 10:32:32.654856  848599 network_create.go:287] error running [docker network inspect addons-109663]: docker network inspect addons-109663: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-109663 not found
	I1216 10:32:32.654870  848599 network_create.go:289] output of [docker network inspect addons-109663]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-109663 not found
	
	** /stderr **
	I1216 10:32:32.654959  848599 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 10:32:32.670365  848599 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0004f4fa0}
	I1216 10:32:32.670414  848599 network_create.go:124] attempt to create docker network addons-109663 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1216 10:32:32.670452  848599 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-109663 addons-109663
	I1216 10:32:32.728359  848599 network_create.go:108] docker network addons-109663 192.168.49.0/24 created
	I1216 10:32:32.728388  848599 kic.go:121] calculated static IP "192.168.49.2" for the "addons-109663" container
	I1216 10:32:32.728453  848599 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 10:32:32.743748  848599 cli_runner.go:164] Run: docker volume create addons-109663 --label name.minikube.sigs.k8s.io=addons-109663 --label created_by.minikube.sigs.k8s.io=true
	I1216 10:32:32.759894  848599 oci.go:103] Successfully created a docker volume addons-109663
	I1216 10:32:32.759977  848599 cli_runner.go:164] Run: docker run --rm --name addons-109663-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-109663 --entrypoint /usr/bin/test -v addons-109663:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -d /var/lib
	I1216 10:32:37.657657  848599 cli_runner.go:217] Completed: docker run --rm --name addons-109663-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-109663 --entrypoint /usr/bin/test -v addons-109663:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -d /var/lib: (4.897631821s)
	I1216 10:32:37.657697  848599 oci.go:107] Successfully prepared a docker volume addons-109663
	I1216 10:32:37.657718  848599 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 10:32:37.657747  848599 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 10:32:37.657821  848599 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20107-840384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-109663:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 10:32:42.147706  848599 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20107-840384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-109663:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -I lz4 -xf /preloaded.tar -C /extractDir: (4.489834787s)
	I1216 10:32:42.147741  848599 kic.go:203] duration metric: took 4.489992007s to extract preloaded images to volume ...
	W1216 10:32:42.147865  848599 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 10:32:42.147983  848599 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 10:32:42.194676  848599 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-109663 --name addons-109663 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-109663 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-109663 --network addons-109663 --ip 192.168.49.2 --volume addons-109663:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2
	I1216 10:32:42.492595  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Running}}
	I1216 10:32:42.509777  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:32:42.526135  848599 cli_runner.go:164] Run: docker exec addons-109663 stat /var/lib/dpkg/alternatives/iptables
	I1216 10:32:42.563632  848599 oci.go:144] the created container "addons-109663" has a running status.
	I1216 10:32:42.563664  848599 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa...
	I1216 10:32:42.655608  848599 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 10:32:42.674141  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:32:42.690709  848599 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 10:32:42.690729  848599 kic_runner.go:114] Args: [docker exec --privileged addons-109663 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 10:32:42.733555  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:32:42.751672  848599 machine.go:93] provisionDockerMachine start ...
	I1216 10:32:42.751782  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:42.769939  848599 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:42.770137  848599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1216 10:32:42.770149  848599 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 10:32:42.770885  848599 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52954->127.0.0.1:33139: read: connection reset by peer
	I1216 10:32:45.894395  848599 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-109663
	
	I1216 10:32:45.894432  848599 ubuntu.go:169] provisioning hostname "addons-109663"
	I1216 10:32:45.894492  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:45.910952  848599 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:45.911128  848599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1216 10:32:45.911140  848599 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-109663 && echo "addons-109663" | sudo tee /etc/hostname
	I1216 10:32:46.045111  848599 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-109663
	
	I1216 10:32:46.045193  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:46.061625  848599 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:46.061807  848599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1216 10:32:46.061823  848599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-109663' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-109663/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-109663' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 10:32:46.186853  848599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 10:32:46.186874  848599 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20107-840384/.minikube CaCertPath:/home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20107-840384/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20107-840384/.minikube}
	I1216 10:32:46.186896  848599 ubuntu.go:177] setting up certificates
	I1216 10:32:46.186907  848599 provision.go:84] configureAuth start
	I1216 10:32:46.186952  848599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-109663
	I1216 10:32:46.201955  848599 provision.go:143] copyHostCerts
	I1216 10:32:46.202017  848599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20107-840384/.minikube/ca.pem (1082 bytes)
	I1216 10:32:46.202141  848599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20107-840384/.minikube/cert.pem (1123 bytes)
	I1216 10:32:46.202206  848599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20107-840384/.minikube/key.pem (1675 bytes)
	I1216 10:32:46.202267  848599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20107-840384/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca-key.pem org=jenkins.addons-109663 san=[127.0.0.1 192.168.49.2 addons-109663 localhost minikube]
	I1216 10:32:46.342382  848599 provision.go:177] copyRemoteCerts
	I1216 10:32:46.342433  848599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 10:32:46.342468  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:46.358354  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:32:46.448060  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 10:32:46.469314  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 10:32:46.489705  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 10:32:46.509732  848599 provision.go:87] duration metric: took 322.814241ms to configureAuth
	I1216 10:32:46.509759  848599 ubuntu.go:193] setting minikube options for container-runtime
	I1216 10:32:46.509910  848599 config.go:182] Loaded profile config "addons-109663": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:32:46.510000  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:46.526470  848599 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:46.526646  848599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 127.0.0.1 33139 <nil> <nil>}
	I1216 10:32:46.526667  848599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 10:32:46.731259  848599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 10:32:46.731288  848599 machine.go:96] duration metric: took 3.97958665s to provisionDockerMachine
	I1216 10:32:46.731304  848599 client.go:171] duration metric: took 14.458503354s to LocalClient.Create
	I1216 10:32:46.731327  848599 start.go:167] duration metric: took 14.458580941s to libmachine.API.Create "addons-109663"
	I1216 10:32:46.731337  848599 start.go:293] postStartSetup for "addons-109663" (driver="docker")
	I1216 10:32:46.731348  848599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 10:32:46.731400  848599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 10:32:46.731446  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:46.748035  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:32:46.835559  848599 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 10:32:46.838341  848599 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 10:32:46.838368  848599 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1216 10:32:46.838385  848599 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1216 10:32:46.838394  848599 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1216 10:32:46.838411  848599 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-840384/.minikube/addons for local assets ...
	I1216 10:32:46.838464  848599 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-840384/.minikube/files for local assets ...
	I1216 10:32:46.838507  848599 start.go:296] duration metric: took 107.161933ms for postStartSetup
	I1216 10:32:46.838809  848599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-109663
	I1216 10:32:46.854233  848599 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/config.json ...
	I1216 10:32:46.854469  848599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 10:32:46.854512  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:46.869838  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:32:46.959575  848599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 10:32:46.963386  848599 start.go:128] duration metric: took 14.692586018s to createHost
	I1216 10:32:46.963416  848599 start.go:83] releasing machines lock for "addons-109663", held for 14.692749507s
	I1216 10:32:46.963496  848599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-109663
	I1216 10:32:46.978640  848599 ssh_runner.go:195] Run: cat /version.json
	I1216 10:32:46.978677  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:46.978701  848599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 10:32:46.978764  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:32:46.995635  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:32:46.996158  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:32:47.078650  848599 ssh_runner.go:195] Run: systemctl --version
	I1216 10:32:47.143743  848599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 10:32:47.278506  848599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 10:32:47.282415  848599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 10:32:47.299071  848599 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1216 10:32:47.299151  848599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 10:32:47.324690  848599 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1216 10:32:47.324713  848599 start.go:495] detecting cgroup driver to use...
	I1216 10:32:47.324748  848599 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 10:32:47.324785  848599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 10:32:47.338062  848599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 10:32:47.347179  848599 docker.go:217] disabling cri-docker service (if available) ...
	I1216 10:32:47.347216  848599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 10:32:47.358730  848599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 10:32:47.370504  848599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 10:32:47.449813  848599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 10:32:47.518777  848599 docker.go:233] disabling docker service ...
	I1216 10:32:47.518823  848599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 10:32:47.535753  848599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 10:32:47.545023  848599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 10:32:47.617016  848599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 10:32:47.691960  848599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 10:32:47.701141  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 10:32:47.714441  848599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 10:32:47.714485  848599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:47.722745  848599 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 10:32:47.722785  848599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:47.731282  848599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:47.739260  848599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:47.747136  848599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 10:32:47.754606  848599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:47.762450  848599 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:47.775303  848599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
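	Read together, the sed/grep edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings; this is a reconstruction from the commands, not a dump of the real file:
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]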
	I1216 10:32:47.783168  848599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 10:32:47.789866  848599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 10:32:47.796667  848599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 10:32:47.867618  848599 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 10:32:47.965258  848599 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 10:32:47.965321  848599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 10:32:47.968509  848599 start.go:563] Will wait 60s for crictl version
	I1216 10:32:47.968560  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:32:47.971345  848599 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 10:32:48.003861  848599 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1216 10:32:48.003945  848599 ssh_runner.go:195] Run: crio --version
	I1216 10:32:48.037600  848599 ssh_runner.go:195] Run: crio --version
	I1216 10:32:48.070624  848599 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1216 10:32:48.071731  848599 cli_runner.go:164] Run: docker network inspect addons-109663 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 10:32:48.086790  848599 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 10:32:48.089949  848599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 10:32:48.099631  848599 kubeadm.go:883] updating cluster {Name:addons-109663 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-109663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 10:32:48.099753  848599 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 10:32:48.099811  848599 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 10:32:48.162586  848599 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 10:32:48.162609  848599 crio.go:433] Images already preloaded, skipping extraction
	I1216 10:32:48.162661  848599 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 10:32:48.193727  848599 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 10:32:48.193748  848599 cache_images.go:84] Images are preloaded, skipping loading
	I1216 10:32:48.193759  848599 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1216 10:32:48.193856  848599 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-109663 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-109663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 10:32:48.193930  848599 ssh_runner.go:195] Run: crio config
	I1216 10:32:48.233450  848599 cni.go:84] Creating CNI manager for ""
	I1216 10:32:48.233469  848599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 10:32:48.233479  848599 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 10:32:48.233499  848599 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-109663 NodeName:addons-109663 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 10:32:48.233626  848599 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-109663"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 10:32:48.233678  848599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1216 10:32:48.241287  848599 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 10:32:48.241353  848599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 10:32:48.248690  848599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 10:32:48.263749  848599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 10:32:48.278768  848599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
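	The 2287-byte payload scp'd above is the kubeadm config printed earlier, written to /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml. When debugging a failed start by hand it can be checked on the node with kubeadm itself (a sketch; `kubeadm config validate` exists in recent releases, including the v1.31 binaries used here):
	    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config print init-defaults   # baseline to diff against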
	I1216 10:32:48.293446  848599 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 10:32:48.296360  848599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 10:32:48.305456  848599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 10:32:48.385855  848599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 10:32:48.396926  848599 certs.go:68] Setting up /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663 for IP: 192.168.49.2
	I1216 10:32:48.396946  848599 certs.go:194] generating shared ca certs ...
	I1216 10:32:48.396972  848599 certs.go:226] acquiring lock for ca certs: {Name:mkc11fd68d423e1cca90bec28435e0a6c7ecf1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.397158  848599 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20107-840384/.minikube/ca.key
	I1216 10:32:48.466160  848599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-840384/.minikube/ca.crt ...
	I1216 10:32:48.466182  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/ca.crt: {Name:mk1859f6bdff9985876c6f50db5f2d1280c287c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.466320  848599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-840384/.minikube/ca.key ...
	I1216 10:32:48.466340  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/ca.key: {Name:mk92e534493378752c6e08cd41ae73570fe64ae3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.466434  848599 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20107-840384/.minikube/proxy-client-ca.key
	I1216 10:32:48.531063  848599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-840384/.minikube/proxy-client-ca.crt ...
	I1216 10:32:48.531083  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/proxy-client-ca.crt: {Name:mkbd2c16dce66b8bd8800e09edb15d99e74a3dee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.531214  848599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-840384/.minikube/proxy-client-ca.key ...
	I1216 10:32:48.531227  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/proxy-client-ca.key: {Name:mkbd79e00e7ff3c72871d6c44df9bbc55c8438ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.531295  848599 certs.go:256] generating profile certs ...
	I1216 10:32:48.531350  848599 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.key
	I1216 10:32:48.531370  848599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt with IP's: []
	I1216 10:32:48.582934  848599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt ...
	I1216 10:32:48.582951  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: {Name:mk7800253813d63a2b9feff6a9f93fbd096ed71c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.583055  848599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.key ...
	I1216 10:32:48.583065  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.key: {Name:mk68a33bf12fa88f4decce469c4693c84cfbbe9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.583137  848599 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.key.5a4e409c
	I1216 10:32:48.583153  848599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.crt.5a4e409c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1216 10:32:48.795073  848599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.crt.5a4e409c ...
	I1216 10:32:48.795097  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.crt.5a4e409c: {Name:mkb1a96aec38a507038981d80b8c62dd0085ece6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.795228  848599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.key.5a4e409c ...
	I1216 10:32:48.795240  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.key.5a4e409c: {Name:mk0eaba652e54fc0326310c214d334efd837fdd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:48.795306  848599 certs.go:381] copying /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.crt.5a4e409c -> /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.crt
	I1216 10:32:48.795380  848599 certs.go:385] copying /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.key.5a4e409c -> /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.key
	I1216 10:32:48.795425  848599 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/proxy-client.key
	I1216 10:32:48.795441  848599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/proxy-client.crt with IP's: []
	I1216 10:32:49.185378  848599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/proxy-client.crt ...
	I1216 10:32:49.185403  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/proxy-client.crt: {Name:mkfb8512dec95af5f7fe9be594be404ecbc3feb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:49.185541  848599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/proxy-client.key ...
	I1216 10:32:49.185553  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/proxy-client.key: {Name:mk1e40828877323680e6bc49b0f353b0f4a8d014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:49.185723  848599 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 10:32:49.185757  848599 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/ca.pem (1082 bytes)
	I1216 10:32:49.185782  848599 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/cert.pem (1123 bytes)
	I1216 10:32:49.185813  848599 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-840384/.minikube/certs/key.pem (1675 bytes)
	I1216 10:32:49.186471  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 10:32:49.208462  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 10:32:49.229862  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 10:32:49.252897  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 10:32:49.272707  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 10:32:49.292707  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 10:32:49.312720  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 10:32:49.332990  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 10:32:49.352947  848599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-840384/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 10:32:49.372714  848599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 10:32:49.387254  848599 ssh_runner.go:195] Run: openssl version
	I1216 10:32:49.391984  848599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 10:32:49.400361  848599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 10:32:49.403124  848599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1216 10:32:49.403185  848599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 10:32:49.409075  848599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
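	The two openssl/ln steps above are how the cluster CA ends up in the system trust store: the PEM is linked into /etc/ssl/certs, and a second symlink named after its OpenSSL subject hash (b5213941 here) is what lets OpenSSL-based clients find it. The same check by hand:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem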
	I1216 10:32:49.416721  848599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 10:32:49.419513  848599 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 10:32:49.419554  848599 kubeadm.go:392] StartCluster: {Name:addons-109663 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-109663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 10:32:49.419649  848599 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 10:32:49.419715  848599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 10:32:49.451626  848599 cri.go:89] found id: ""
	I1216 10:32:49.451684  848599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 10:32:49.459035  848599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 10:32:49.466323  848599 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1216 10:32:49.466368  848599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 10:32:49.473764  848599 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 10:32:49.473786  848599 kubeadm.go:157] found existing configuration files:
	
	I1216 10:32:49.473827  848599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 10:32:49.481337  848599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 10:32:49.481398  848599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 10:32:49.488336  848599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 10:32:49.495564  848599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 10:32:49.495613  848599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 10:32:49.502569  848599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 10:32:49.509725  848599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 10:32:49.509771  848599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 10:32:49.517196  848599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 10:32:49.524832  848599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 10:32:49.524874  848599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 10:32:49.531758  848599 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 10:32:49.565735  848599 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1216 10:32:49.565811  848599 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 10:32:49.580542  848599 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1216 10:32:49.580609  848599 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-gcp
	I1216 10:32:49.580646  848599 kubeadm.go:310] OS: Linux
	I1216 10:32:49.580689  848599 kubeadm.go:310] CGROUPS_CPU: enabled
	I1216 10:32:49.580778  848599 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1216 10:32:49.580831  848599 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1216 10:32:49.580871  848599 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1216 10:32:49.580933  848599 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1216 10:32:49.581006  848599 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1216 10:32:49.581096  848599 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1216 10:32:49.581174  848599 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1216 10:32:49.581246  848599 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1216 10:32:49.629850  848599 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 10:32:49.630004  848599 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 10:32:49.630169  848599 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 10:32:49.636020  848599 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 10:32:49.639441  848599 out.go:235]   - Generating certificates and keys ...
	I1216 10:32:49.639556  848599 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 10:32:49.639619  848599 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 10:32:49.836390  848599 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 10:32:50.046656  848599 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1216 10:32:50.397346  848599 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1216 10:32:50.460502  848599 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1216 10:32:50.635424  848599 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1216 10:32:50.635586  848599 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-109663 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 10:32:50.820560  848599 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1216 10:32:50.820691  848599 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-109663 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 10:32:51.004936  848599 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 10:32:51.062758  848599 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 10:32:51.170931  848599 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1216 10:32:51.170996  848599 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 10:32:51.335077  848599 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 10:32:51.557386  848599 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 10:32:51.984782  848599 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 10:32:52.326144  848599 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 10:32:52.700266  848599 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 10:32:52.700739  848599 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 10:32:52.703004  848599 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 10:32:52.704967  848599 out.go:235]   - Booting up control plane ...
	I1216 10:32:52.705051  848599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 10:32:52.705133  848599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 10:32:52.705698  848599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 10:32:52.714141  848599 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 10:32:52.718979  848599 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 10:32:52.719037  848599 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 10:32:52.797628  848599 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 10:32:52.797772  848599 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 10:32:53.298364  848599 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.804305ms
	I1216 10:32:53.298473  848599 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 10:32:57.300009  848599 kubeadm.go:310] [api-check] The API server is healthy after 4.001662107s
	I1216 10:32:57.310037  848599 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 10:32:57.319228  848599 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 10:32:57.334100  848599 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 10:32:57.334362  848599 kubeadm.go:310] [mark-control-plane] Marking the node addons-109663 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 10:32:57.340771  848599 kubeadm.go:310] [bootstrap-token] Using token: 2h4i74.yidhy7fpg06tydg2
	I1216 10:32:57.341964  848599 out.go:235]   - Configuring RBAC rules ...
	I1216 10:32:57.342133  848599 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 10:32:57.345049  848599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 10:32:57.350019  848599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 10:32:57.352300  848599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 10:32:57.355264  848599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 10:32:57.357395  848599 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 10:32:57.705816  848599 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 10:32:58.121923  848599 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 10:32:58.704608  848599 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 10:32:58.705594  848599 kubeadm.go:310] 
	I1216 10:32:58.705701  848599 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 10:32:58.705720  848599 kubeadm.go:310] 
	I1216 10:32:58.705826  848599 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 10:32:58.705837  848599 kubeadm.go:310] 
	I1216 10:32:58.705873  848599 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 10:32:58.705959  848599 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 10:32:58.706029  848599 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 10:32:58.706038  848599 kubeadm.go:310] 
	I1216 10:32:58.706098  848599 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 10:32:58.706107  848599 kubeadm.go:310] 
	I1216 10:32:58.706168  848599 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 10:32:58.706183  848599 kubeadm.go:310] 
	I1216 10:32:58.706227  848599 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 10:32:58.706298  848599 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 10:32:58.706360  848599 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 10:32:58.706369  848599 kubeadm.go:310] 
	I1216 10:32:58.706437  848599 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 10:32:58.706507  848599 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 10:32:58.706514  848599 kubeadm.go:310] 
	I1216 10:32:58.706586  848599 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2h4i74.yidhy7fpg06tydg2 \
	I1216 10:32:58.706682  848599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e6a6471f4384e10723e2292fb8d114ab4ea25aa738d7f29c5187bb98e939b6b4 \
	I1216 10:32:58.706706  848599 kubeadm.go:310] 	--control-plane 
	I1216 10:32:58.706718  848599 kubeadm.go:310] 
	I1216 10:32:58.706818  848599 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 10:32:58.706831  848599 kubeadm.go:310] 
	I1216 10:32:58.706927  848599 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2h4i74.yidhy7fpg06tydg2 \
	I1216 10:32:58.707051  848599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e6a6471f4384e10723e2292fb8d114ab4ea25aa738d7f29c5187bb98e939b6b4 
	I1216 10:32:58.709379  848599 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-gcp\n", err: exit status 1
	I1216 10:32:58.709495  848599 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
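	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key, and with certificatesDir set to /var/lib/minikube/certs it can be recomputed on the node to confirm it matches (standard kubeadm recipe, shown here as a sketch):
	    openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
	      | openssl pkey -pubin -outform der \
	      | openssl dgst -sha256 -hex
	    # digest should match e6a6471f4384e10723e2292fb8d114ab4ea25aa738d7f29c5187bb98e939b6b4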
	I1216 10:32:58.709516  848599 cni.go:84] Creating CNI manager for ""
	I1216 10:32:58.709524  848599 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 10:32:58.711000  848599 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1216 10:32:58.712127  848599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 10:32:58.715765  848599 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1216 10:32:58.715784  848599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 10:32:58.731953  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
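	Because the docker driver plus crio runtime selects kindnet (see the cni.go lines above), the apply here should leave a kindnet DaemonSet in kube-system. A hedged spot-check; the DaemonSet name and label are how minikube's kindnet manifest is normally written, so adjust if they differ:
	    kubectl --context addons-109663 -n kube-system get ds kindnet
	    kubectl --context addons-109663 -n kube-system get pods -l app=kindnet -o wide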
	I1216 10:32:58.917087  848599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 10:32:58.917200  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:58.917234  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-109663 minikube.k8s.io/updated_at=2024_12_16T10_32_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8 minikube.k8s.io/name=addons-109663 minikube.k8s.io/primary=true
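	The label command above stamps the node with minikube's bookkeeping labels (version, commit, primary, updated_at). They can be read back later during triage with plain kubectl (illustrative):
	    kubectl --context addons-109663 get node addons-109663 --show-labels | tr ',' '\n' | grep minikube.k8s.io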
	I1216 10:32:58.924437  848599 ops.go:34] apiserver oom_adj: -16
	I1216 10:32:58.985975  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:59.486347  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:59.986971  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:00.486306  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:00.986656  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:01.486381  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:01.986792  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:02.486214  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:02.986029  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:03.486249  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:03.986520  848599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:33:04.087749  848599 kubeadm.go:1113] duration metric: took 5.170647368s to wait for elevateKubeSystemPrivileges
	I1216 10:33:04.087800  848599 kubeadm.go:394] duration metric: took 14.668249445s to StartCluster
	I1216 10:33:04.087826  848599 settings.go:142] acquiring lock: {Name:mk06b7df26b8c35e37c6f668a6089af3b5005238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:33:04.087950  848599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20107-840384/kubeconfig
	I1216 10:33:04.088601  848599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-840384/kubeconfig: {Name:mkf0f71705623f4096af1601d96997d88188e951 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:33:04.088814  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 10:33:04.088833  848599 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 10:33:04.088909  848599 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
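	The toEnable map above is the addon matrix this test run requests. Outside the harness the same state can be compared against minikube's own view (assumes the profile still exists on the machine):
	    out/minikube-linux-amd64 -p addons-109663 addons list
	    out/minikube-linux-amd64 -p addons-109663 addons list --output json   # machine-readable, easier to diff against toEnable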
	I1216 10:33:04.089044  848599 addons.go:69] Setting yakd=true in profile "addons-109663"
	I1216 10:33:04.089083  848599 addons.go:234] Setting addon yakd=true in "addons-109663"
	I1216 10:33:04.089098  848599 addons.go:69] Setting inspektor-gadget=true in profile "addons-109663"
	I1216 10:33:04.089104  848599 config.go:182] Loaded profile config "addons-109663": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:33:04.089121  848599 addons.go:234] Setting addon inspektor-gadget=true in "addons-109663"
	I1216 10:33:04.089133  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.089128  848599 addons.go:69] Setting default-storageclass=true in profile "addons-109663"
	I1216 10:33:04.089145  848599 addons.go:69] Setting cloud-spanner=true in profile "addons-109663"
	I1216 10:33:04.089171  848599 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-109663"
	I1216 10:33:04.089173  848599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-109663"
	I1216 10:33:04.089187  848599 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-109663"
	I1216 10:33:04.089194  848599 addons.go:69] Setting ingress=true in profile "addons-109663"
	I1216 10:33:04.089197  848599 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-109663"
	I1216 10:33:04.089213  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.089219  848599 addons.go:234] Setting addon ingress=true in "addons-109663"
	I1216 10:33:04.089236  848599 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-109663"
	I1216 10:33:04.089256  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.089270  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.089587  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.089704  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.089738  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.089751  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.089756  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.089986  848599 addons.go:69] Setting ingress-dns=true in profile "addons-109663"
	I1216 10:33:04.090008  848599 addons.go:234] Setting addon ingress-dns=true in "addons-109663"
	I1216 10:33:04.090019  848599 addons.go:69] Setting storage-provisioner=true in profile "addons-109663"
	I1216 10:33:04.090042  848599 addons.go:234] Setting addon storage-provisioner=true in "addons-109663"
	I1216 10:33:04.090056  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.090072  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.090107  848599 addons.go:69] Setting volcano=true in profile "addons-109663"
	I1216 10:33:04.090146  848599 addons.go:234] Setting addon volcano=true in "addons-109663"
	I1216 10:33:04.090170  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.090589  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.090631  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.090645  848599 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-109663"
	I1216 10:33:04.090662  848599 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-109663"
	I1216 10:33:04.090912  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.091232  848599 out.go:177] * Verifying Kubernetes components...
	I1216 10:33:04.089187  848599 addons.go:234] Setting addon cloud-spanner=true in "addons-109663"
	I1216 10:33:04.091364  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.089180  848599 addons.go:69] Setting gcp-auth=true in profile "addons-109663"
	I1216 10:33:04.091483  848599 mustload.go:65] Loading cluster: addons-109663
	I1216 10:33:04.091537  848599 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-109663"
	I1216 10:33:04.091591  848599 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-109663"
	I1216 10:33:04.091635  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.091761  848599 config.go:182] Loaded profile config "addons-109663": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:33:04.091927  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.092030  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.092131  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.095721  848599 addons.go:69] Setting metrics-server=true in profile "addons-109663"
	I1216 10:33:04.095747  848599 addons.go:234] Setting addon metrics-server=true in "addons-109663"
	I1216 10:33:04.095746  848599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 10:33:04.095777  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.095868  848599 addons.go:69] Setting registry=true in profile "addons-109663"
	I1216 10:33:04.091362  848599 addons.go:69] Setting volumesnapshots=true in profile "addons-109663"
	I1216 10:33:04.089169  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.095956  848599 addons.go:234] Setting addon registry=true in "addons-109663"
	I1216 10:33:04.095996  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.096414  848599 addons.go:234] Setting addon volumesnapshots=true in "addons-109663"
	I1216 10:33:04.096445  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.096452  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.096755  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.096949  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.097026  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.090631  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.130675  848599 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1216 10:33:04.132180  848599 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1216 10:33:04.132204  848599 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1216 10:33:04.132270  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.146511  848599 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1216 10:33:04.146594  848599 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1216 10:33:04.147723  848599 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 10:33:04.147747  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1216 10:33:04.147825  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.149145  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1216 10:33:04.150251  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1216 10:33:04.150478  848599 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-109663"
	I1216 10:33:04.150545  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.150630  848599 out.go:177]   - Using image docker.io/registry:2.8.3
	I1216 10:33:04.151011  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.152794  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1216 10:33:04.152887  848599 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1216 10:33:04.154036  848599 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1216 10:33:04.154055  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1216 10:33:04.154111  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.154856  848599 addons.go:234] Setting addon default-storageclass=true in "addons-109663"
	I1216 10:33:04.154903  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.155351  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:04.155923  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1216 10:33:04.156321  848599 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1216 10:33:04.156342  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1216 10:33:04.156387  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.158101  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1216 10:33:04.159206  848599 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 10:33:04.160412  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1216 10:33:04.161739  848599 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 10:33:04.162741  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1216 10:33:04.163984  848599 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1216 10:33:04.164963  848599 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1216 10:33:04.165572  848599 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 10:33:04.165607  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1216 10:33:04.165667  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.167879  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1216 10:33:04.170258  848599 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 10:33:04.170302  848599 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	W1216 10:33:04.170310  848599 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1216 10:33:04.170277  848599 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1216 10:33:04.170471  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.172519  848599 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1216 10:33:04.172537  848599 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1216 10:33:04.172605  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.175569  848599 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1216 10:33:04.175587  848599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1216 10:33:04.175660  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.210930  848599 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1216 10:33:04.212499  848599 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1216 10:33:04.212530  848599 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1216 10:33:04.212615  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.235651  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.235699  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.236358  848599 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 10:33:04.236380  848599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 10:33:04.236442  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.236549  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.236651  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:04.240224  848599 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1216 10:33:04.240241  848599 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1216 10:33:04.240414  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.240883  848599 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1216 10:33:04.241768  848599 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 10:33:04.241792  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1216 10:33:04.241845  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.242344  848599 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 10:33:04.242364  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1216 10:33:04.242425  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.243294  848599 out.go:177]   - Using image docker.io/busybox:stable
	I1216 10:33:04.244353  848599 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 10:33:04.244382  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1216 10:33:04.244429  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.244885  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.245310  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.246878  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.246930  848599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 10:33:04.248030  848599 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 10:33:04.248051  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 10:33:04.248105  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:04.263238  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.271380  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.271660  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.272475  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	W1216 10:33:04.279597  848599 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 10:33:04.279627  848599 retry.go:31] will retry after 144.207623ms: ssh: handshake failed: EOF
	I1216 10:33:04.288206  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.296070  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 10:33:04.296458  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:04.296707  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	W1216 10:33:04.297011  848599 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1216 10:33:04.297031  848599 retry.go:31] will retry after 279.355591ms: ssh: handshake failed: EOF
	I1216 10:33:04.494587  848599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 10:33:04.589727  848599 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1216 10:33:04.589761  848599 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1216 10:33:04.673229  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 10:33:04.679498  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 10:33:04.774125  848599 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1216 10:33:04.774165  848599 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1216 10:33:04.777542  848599 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 10:33:04.777581  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1216 10:33:04.783458  848599 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1216 10:33:04.783503  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1216 10:33:04.794089  848599 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1216 10:33:04.794123  848599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1216 10:33:04.794663  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 10:33:04.873178  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1216 10:33:04.879090  848599 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1216 10:33:04.879113  848599 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1216 10:33:04.881997  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 10:33:04.885458  848599 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1216 10:33:04.885483  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1216 10:33:04.892271  848599 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1216 10:33:04.892293  848599 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1216 10:33:04.895905  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1216 10:33:04.972830  848599 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 10:33:04.972868  848599 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 10:33:04.973550  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 10:33:04.979372  848599 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1216 10:33:04.979401  848599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1216 10:33:05.074625  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 10:33:05.076930  848599 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1216 10:33:05.076956  848599 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1216 10:33:05.186319  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 10:33:05.188659  848599 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1216 10:33:05.188699  848599 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1216 10:33:05.191024  848599 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 10:33:05.191062  848599 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 10:33:05.289209  848599 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1216 10:33:05.289291  848599 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1216 10:33:05.290410  848599 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1216 10:33:05.290476  848599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1216 10:33:05.372799  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1216 10:33:05.384174  848599 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1216 10:33:05.384201  848599 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1216 10:33:05.592500  848599 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 10:33:05.592549  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1216 10:33:05.773126  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 10:33:05.879400  848599 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1216 10:33:05.879511  848599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1216 10:33:05.888426  848599 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1216 10:33:05.888459  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1216 10:33:06.077724  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 10:33:06.178196  848599 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.882086183s)
	I1216 10:33:06.178247  848599 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1216 10:33:06.179598  848599 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.684975957s)
	I1216 10:33:06.180502  848599 node_ready.go:35] waiting up to 6m0s for node "addons-109663" to be "Ready" ...
	I1216 10:33:06.373531  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1216 10:33:06.389254  848599 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1216 10:33:06.389294  848599 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1216 10:33:06.898854  848599 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-109663" context rescaled to 1 replicas
	I1216 10:33:06.985246  848599 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1216 10:33:06.985287  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1216 10:33:07.274010  848599 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1216 10:33:07.274100  848599 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1216 10:33:07.472478  848599 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1216 10:33:07.472515  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1216 10:33:07.687775  848599 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1216 10:33:07.687864  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1216 10:33:07.974316  848599 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 10:33:07.974399  848599 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1216 10:33:08.274679  848599 node_ready.go:53] node "addons-109663" has status "Ready":"False"
	I1216 10:33:08.287244  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 10:33:09.273222  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.599944933s)
	I1216 10:33:10.480914  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.801378523s)
	I1216 10:33:10.480957  848599 addons.go:475] Verifying addon ingress=true in "addons-109663"
	I1216 10:33:10.481001  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.686304837s)
	I1216 10:33:10.481109  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.607900487s)
	I1216 10:33:10.481203  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.599182204s)
	I1216 10:33:10.481438  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.585508598s)
	I1216 10:33:10.481459  848599 addons.go:475] Verifying addon registry=true in "addons-109663"
	I1216 10:33:10.481883  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.109009356s)
	I1216 10:33:10.481668  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.508095231s)
	I1216 10:33:10.481710  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.407059386s)
	I1216 10:33:10.481778  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.295416881s)
	I1216 10:33:10.481971  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.708752789s)
	I1216 10:33:10.481991  848599 addons.go:475] Verifying addon metrics-server=true in "addons-109663"
	I1216 10:33:10.482371  848599 out.go:177] * Verifying ingress addon...
	I1216 10:33:10.483212  848599 out.go:177] * Verifying registry addon...
	I1216 10:33:10.484880  848599 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1216 10:33:10.485888  848599 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1216 10:33:10.491352  848599 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1216 10:33:10.491375  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:10.492370  848599 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 10:33:10.492394  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1216 10:33:10.497556  848599 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1216 10:33:10.685455  848599 node_ready.go:53] node "addons-109663" has status "Ready":"False"
	I1216 10:33:10.989291  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:10.990123  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:11.410531  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.332755094s)
	W1216 10:33:11.410572  848599 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 10:33:11.410594  848599 retry.go:31] will retry after 146.951232ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 10:33:11.410618  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.037021705s)
	I1216 10:33:11.412521  848599 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-109663 service yakd-dashboard -n yakd-dashboard
	
	I1216 10:33:11.476073  848599 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1216 10:33:11.476207  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:11.488433  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:11.488982  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:11.496693  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:11.558632  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 10:33:11.682055  848599 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1216 10:33:11.773243  848599 addons.go:234] Setting addon gcp-auth=true in "addons-109663"
	I1216 10:33:11.773316  848599 host.go:66] Checking if "addons-109663" exists ...
	I1216 10:33:11.773724  848599 cli_runner.go:164] Run: docker container inspect addons-109663 --format={{.State.Status}}
	I1216 10:33:11.794622  848599 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1216 10:33:11.794702  848599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-109663
	I1216 10:33:11.815157  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.527792528s)
	I1216 10:33:11.815205  848599 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-109663"
	I1216 10:33:11.816541  848599 out.go:177] * Verifying csi-hostpath-driver addon...
	I1216 10:33:11.817906  848599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/addons-109663/id_rsa Username:docker}
	I1216 10:33:11.818272  848599 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1216 10:33:11.876736  848599 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 10:33:11.876761  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:11.988859  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:11.988992  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:12.321133  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:12.488213  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:12.488670  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:12.820552  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:12.988673  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:12.988814  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:13.183539  848599 node_ready.go:53] node "addons-109663" has status "Ready":"False"
	I1216 10:33:13.321278  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:13.488254  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:13.488540  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:13.821374  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:13.988232  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:13.988468  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:14.319445  848599 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.760767877s)
	I1216 10:33:14.319526  848599 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.524872285s)
	I1216 10:33:14.321240  848599 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 10:33:14.321724  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:14.323613  848599 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1216 10:33:14.324764  848599 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1216 10:33:14.324784  848599 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1216 10:33:14.342010  848599 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1216 10:33:14.342031  848599 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1216 10:33:14.358225  848599 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 10:33:14.358242  848599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1216 10:33:14.373588  848599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 10:33:14.489146  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:14.489195  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:14.688535  848599 addons.go:475] Verifying addon gcp-auth=true in "addons-109663"
	I1216 10:33:14.689726  848599 out.go:177] * Verifying gcp-auth addon...
	I1216 10:33:14.691774  848599 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1216 10:33:14.693794  848599 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1216 10:33:14.693814  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:14.821373  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:14.988073  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:14.988401  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:15.183828  848599 node_ready.go:53] node "addons-109663" has status "Ready":"False"
	I1216 10:33:15.195110  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:15.321272  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:15.487994  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:15.488083  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:15.693975  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:15.821122  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:15.988405  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:15.989637  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:16.194046  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:16.321171  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:16.487823  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:16.488160  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:16.693727  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:16.821230  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:16.988099  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:16.988112  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:17.194466  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:17.320574  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:17.488270  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:17.488459  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:17.683559  848599 node_ready.go:53] node "addons-109663" has status "Ready":"False"
	I1216 10:33:17.694310  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:17.821698  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:17.988850  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:17.989287  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:18.194610  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:18.320610  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:18.488098  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:18.488541  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:18.694691  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:18.820791  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:18.988425  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:18.988881  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:19.194979  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:19.321154  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:19.488238  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:19.488592  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:19.694150  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:19.821326  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:19.988036  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:19.988321  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:20.182762  848599 node_ready.go:53] node "addons-109663" has status "Ready":"False"
	I1216 10:33:20.194584  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:20.322587  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:20.488030  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:20.488439  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:20.694318  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:20.821269  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:20.987972  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:20.988640  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:21.195178  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:21.321555  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:21.488262  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:21.488509  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:21.694792  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:21.821025  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:21.988166  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:21.988808  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:22.183545  848599 node_ready.go:49] node "addons-109663" has status "Ready":"True"
	I1216 10:33:22.183575  848599 node_ready.go:38] duration metric: took 16.003041871s for node "addons-109663" to be "Ready" ...
	I1216 10:33:22.183591  848599 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 10:33:22.194312  848599 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-nhj8x" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:22.197522  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:22.322250  848599 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 10:33:22.322336  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:22.490226  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:22.490635  848599 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 10:33:22.490660  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:22.696479  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:22.824283  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:22.991433  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:22.992361  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:23.195362  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:23.322483  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:23.489322  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:23.489683  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:23.695267  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:23.822532  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:23.989726  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:23.990631  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:24.195008  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:24.198627  848599 pod_ready.go:103] pod "amd-gpu-device-plugin-nhj8x" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:24.321616  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:24.489636  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:24.489791  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:24.695007  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:24.822843  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:24.988844  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:24.989122  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:25.195436  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:25.323096  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:25.488740  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:25.489022  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:25.694617  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:25.698720  848599 pod_ready.go:93] pod "amd-gpu-device-plugin-nhj8x" in "kube-system" namespace has status "Ready":"True"
	I1216 10:33:25.698739  848599 pod_ready.go:82] duration metric: took 3.504402288s for pod "amd-gpu-device-plugin-nhj8x" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.698748  848599 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ksv2k" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.702610  848599 pod_ready.go:93] pod "coredns-7c65d6cfc9-ksv2k" in "kube-system" namespace has status "Ready":"True"
	I1216 10:33:25.702627  848599 pod_ready.go:82] duration metric: took 3.872629ms for pod "coredns-7c65d6cfc9-ksv2k" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.702644  848599 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-109663" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.706224  848599 pod_ready.go:93] pod "etcd-addons-109663" in "kube-system" namespace has status "Ready":"True"
	I1216 10:33:25.706253  848599 pod_ready.go:82] duration metric: took 3.589378ms for pod "etcd-addons-109663" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.706269  848599 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-109663" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.709711  848599 pod_ready.go:93] pod "kube-apiserver-addons-109663" in "kube-system" namespace has status "Ready":"True"
	I1216 10:33:25.709728  848599 pod_ready.go:82] duration metric: took 3.450709ms for pod "kube-apiserver-addons-109663" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.709736  848599 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-109663" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.713265  848599 pod_ready.go:93] pod "kube-controller-manager-addons-109663" in "kube-system" namespace has status "Ready":"True"
	I1216 10:33:25.713281  848599 pod_ready.go:82] duration metric: took 3.538224ms for pod "kube-controller-manager-addons-109663" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.713292  848599 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dw2js" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:25.822420  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:25.989042  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:25.989188  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:26.097285  848599 pod_ready.go:93] pod "kube-proxy-dw2js" in "kube-system" namespace has status "Ready":"True"
	I1216 10:33:26.097307  848599 pod_ready.go:82] duration metric: took 384.009465ms for pod "kube-proxy-dw2js" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:26.097317  848599 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-109663" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:26.194937  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:26.322961  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:26.489581  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:26.489586  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:26.497405  848599 pod_ready.go:93] pod "kube-scheduler-addons-109663" in "kube-system" namespace has status "Ready":"True"
	I1216 10:33:26.497429  848599 pod_ready.go:82] duration metric: took 400.104712ms for pod "kube-scheduler-addons-109663" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:26.497442  848599 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace to be "Ready" ...
	I1216 10:33:26.696384  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:26.823165  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:26.989901  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:26.990164  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:27.195958  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:27.322795  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:27.489994  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:27.490526  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:27.695991  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:27.822606  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:27.989159  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:27.989525  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:28.195007  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:28.323133  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:28.489488  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:28.489846  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:28.502566  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:28.695362  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:28.823297  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:28.989059  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:28.989279  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:29.194687  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:29.375493  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:29.489995  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:29.493278  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:29.695153  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:29.823826  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:29.991148  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:29.991714  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:30.195680  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:30.322033  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:30.489421  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:30.489469  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:30.694988  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:30.876318  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:30.989582  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:30.990008  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:31.003003  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:31.195966  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:31.323096  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:31.489187  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:31.489815  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:31.695583  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:31.822994  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:31.988983  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:31.989237  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:32.195592  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:32.323277  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:32.488903  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:32.489392  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:32.696506  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:32.823243  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:32.988885  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:32.989148  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:33.196341  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:33.322674  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:33.488634  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:33.488667  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:33.502086  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:33.693815  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:33.821559  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:33.988429  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:33.988769  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:34.194152  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:34.325349  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:34.488396  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:34.488714  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:34.694574  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:34.876109  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:34.990189  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:34.990724  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:35.195194  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:35.375444  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:35.492191  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:35.493688  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:35.502341  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:35.695132  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:35.876051  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:35.990210  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:35.993798  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:36.195051  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:36.322207  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:36.489307  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:36.489410  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:36.696303  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:36.823455  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:36.989688  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:36.989711  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:37.195654  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:37.323150  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:37.489519  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:37.489577  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:37.502620  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:37.695819  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:37.823526  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:37.989517  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:37.989692  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:38.195490  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:38.323913  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:38.489512  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:38.489639  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:38.695009  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:38.823240  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:38.989637  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:38.989966  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:39.195863  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:39.322922  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:39.489532  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:39.489839  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:39.502745  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:39.695567  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:39.822774  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:39.989533  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:39.989845  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:40.195641  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:40.376146  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:40.490514  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:40.490606  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:40.696062  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:40.875447  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:40.989883  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:40.990067  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:41.196047  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:41.324554  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:41.489061  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:41.489773  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:41.502805  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:41.695780  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:41.823285  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:41.989357  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:41.989524  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:42.195514  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:42.323286  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:42.489536  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:42.489650  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:42.695923  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:42.823083  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:42.989403  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:42.989743  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:43.195428  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:43.374868  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:43.489367  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:43.489663  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:43.503985  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:43.696012  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:43.822523  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:43.989041  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:43.989507  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:44.196437  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:44.322827  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:44.489009  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:44.489985  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:44.695640  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:44.823416  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:44.989302  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:44.989712  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:45.194931  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:45.322806  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:45.489197  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:45.489242  848599 kapi.go:107] duration metric: took 35.003353773s to wait for kubernetes.io/minikube-addons=registry ...
	I1216 10:33:45.694246  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:45.822395  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:45.988886  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:46.003221  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:46.194491  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:46.322813  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:46.488662  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:46.694552  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:46.822549  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:46.989251  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:47.195148  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:47.322802  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:47.490015  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:47.694873  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:47.823284  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:47.989150  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:48.084009  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:48.195599  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:48.376370  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:48.489896  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:48.696319  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:48.876501  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:48.992663  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:49.195582  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:49.375953  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:49.490297  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:49.695610  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:49.823687  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:49.989632  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:50.195374  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:50.323555  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:50.489368  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:50.503787  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:50.695450  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:50.823129  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:50.988637  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:51.195270  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:51.323080  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:51.508782  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:51.695217  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:51.823494  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:51.989770  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:52.195617  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:52.321818  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:52.489915  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:52.696638  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:52.826492  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:52.988736  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:53.003486  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:53.195901  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:53.323388  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:53.490222  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:53.695771  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:53.824714  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:53.989114  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:54.195596  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:54.323421  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:54.488781  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:54.694998  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:54.822746  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:54.989975  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:55.195250  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:55.323393  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:55.489828  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:55.502636  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:55.695371  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:55.823548  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:55.988753  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:56.195311  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:56.322360  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:56.488475  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:56.695360  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:56.823160  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:56.988763  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:57.195557  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:57.323228  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:57.488760  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:57.502986  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:33:57.695786  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:57.822469  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:57.989745  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:58.194764  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:58.322146  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:58.489249  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:58.695078  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:58.822330  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:58.988720  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:59.195576  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:59.323259  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:59.490215  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:59.696072  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:59.823107  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:00.010677  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:00.012028  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:00.194618  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:34:00.322187  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:00.488296  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:00.695347  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:34:00.822633  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:00.989168  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:01.194868  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:34:01.322119  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:01.489010  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:01.695366  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:34:01.823165  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:01.988799  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:02.194750  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:34:02.322202  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:02.488979  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:02.502416  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:02.695830  848599 kapi.go:107] duration metric: took 48.004052123s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1216 10:34:02.697415  848599 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-109663 cluster.
	I1216 10:34:02.698542  848599 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1216 10:34:02.699693  848599 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1216 10:34:02.874592  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:02.989784  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:03.322719  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:03.489551  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:03.823450  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:03.989440  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:04.324087  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:04.489445  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:04.502644  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:04.822353  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:04.989069  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:05.323880  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:05.512296  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:05.875523  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:05.990166  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:06.397131  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:06.489855  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:06.574453  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:06.877071  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:06.989900  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:07.376293  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:07.494377  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:07.878721  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:07.988468  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:08.323271  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:08.489196  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:08.823385  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:08.988831  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:09.003062  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:09.323192  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:09.489553  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:09.822684  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:09.989937  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:10.323395  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:10.489825  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:10.823045  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:10.988842  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:11.003275  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:11.322851  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:11.489655  848599 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:34:11.823870  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:11.989881  848599 kapi.go:107] duration metric: took 1m1.505001565s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1216 10:34:12.322605  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:12.876290  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:13.003959  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:13.324089  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:13.823333  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:14.322846  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:14.822531  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:15.323702  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:15.503382  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:15.822069  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:16.322918  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:16.822182  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:17.322594  848599 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:34:17.823222  848599 kapi.go:107] duration metric: took 1m6.004947421s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1216 10:34:17.824726  848599 out.go:177] * Enabled addons: storage-provisioner, amd-gpu-device-plugin, cloud-spanner, inspektor-gadget, ingress-dns, nvidia-device-plugin, metrics-server, default-storageclass, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1216 10:34:17.825819  848599 addons.go:510] duration metric: took 1m13.736912479s for enable addons: enabled=[storage-provisioner amd-gpu-device-plugin cloud-spanner inspektor-gadget ingress-dns nvidia-device-plugin metrics-server default-storageclass yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1216 10:34:18.003373  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:20.502137  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:22.502669  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:24.503206  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:26.503344  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:28.620772  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:31.003959  848599 pod_ready.go:103] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"False"
	I1216 10:34:33.002781  848599 pod_ready.go:93] pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace has status "Ready":"True"
	I1216 10:34:33.002804  848599 pod_ready.go:82] duration metric: took 1m6.505353818s for pod "metrics-server-84c5f94fbc-z8rzz" in "kube-system" namespace to be "Ready" ...
	I1216 10:34:33.002816  848599 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-k4znm" in "kube-system" namespace to be "Ready" ...
	I1216 10:34:33.007179  848599 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-k4znm" in "kube-system" namespace has status "Ready":"True"
	I1216 10:34:33.007200  848599 pod_ready.go:82] duration metric: took 4.376449ms for pod "nvidia-device-plugin-daemonset-k4znm" in "kube-system" namespace to be "Ready" ...
	I1216 10:34:33.007222  848599 pod_ready.go:39] duration metric: took 1m10.823613152s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 10:34:33.007246  848599 api_server.go:52] waiting for apiserver process to appear ...
	I1216 10:34:33.007317  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 10:34:33.007419  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 10:34:33.041826  848599 cri.go:89] found id: "c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804"
	I1216 10:34:33.041843  848599 cri.go:89] found id: ""
	I1216 10:34:33.041852  848599 logs.go:282] 1 containers: [c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804]
	I1216 10:34:33.041893  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:33.045200  848599 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 10:34:33.045244  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 10:34:33.078373  848599 cri.go:89] found id: "93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6"
	I1216 10:34:33.078390  848599 cri.go:89] found id: ""
	I1216 10:34:33.078398  848599 logs.go:282] 1 containers: [93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6]
	I1216 10:34:33.078432  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:33.081776  848599 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 10:34:33.081822  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 10:34:33.113665  848599 cri.go:89] found id: "d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190"
	I1216 10:34:33.113684  848599 cri.go:89] found id: ""
	I1216 10:34:33.113692  848599 logs.go:282] 1 containers: [d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190]
	I1216 10:34:33.113726  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:33.116711  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 10:34:33.116773  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 10:34:33.149097  848599 cri.go:89] found id: "c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53"
	I1216 10:34:33.149116  848599 cri.go:89] found id: ""
	I1216 10:34:33.149129  848599 logs.go:282] 1 containers: [c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53]
	I1216 10:34:33.149163  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:33.152109  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 10:34:33.152155  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 10:34:33.182865  848599 cri.go:89] found id: "c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc"
	I1216 10:34:33.182884  848599 cri.go:89] found id: ""
	I1216 10:34:33.182894  848599 logs.go:282] 1 containers: [c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc]
	I1216 10:34:33.182927  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:33.185812  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 10:34:33.185877  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 10:34:33.217210  848599 cri.go:89] found id: "bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50"
	I1216 10:34:33.217232  848599 cri.go:89] found id: ""
	I1216 10:34:33.217244  848599 logs.go:282] 1 containers: [bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50]
	I1216 10:34:33.217278  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:33.220246  848599 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 10:34:33.220314  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 10:34:33.252320  848599 cri.go:89] found id: "9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9"
	I1216 10:34:33.252355  848599 cri.go:89] found id: ""
	I1216 10:34:33.252367  848599 logs.go:282] 1 containers: [9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9]
	I1216 10:34:33.252411  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:33.255333  848599 logs.go:123] Gathering logs for kubelet ...
	I1216 10:34:33.255361  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 10:34:33.334752  848599 logs.go:123] Gathering logs for kube-apiserver [c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804] ...
	I1216 10:34:33.334782  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804"
	I1216 10:34:33.377060  848599 logs.go:123] Gathering logs for kube-proxy [c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc] ...
	I1216 10:34:33.377081  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc"
	I1216 10:34:33.409466  848599 logs.go:123] Gathering logs for kube-controller-manager [bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50] ...
	I1216 10:34:33.409490  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50"
	I1216 10:34:33.463839  848599 logs.go:123] Gathering logs for CRI-O ...
	I1216 10:34:33.463865  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 10:34:33.536966  848599 logs.go:123] Gathering logs for container status ...
	I1216 10:34:33.536995  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 10:34:33.576693  848599 logs.go:123] Gathering logs for dmesg ...
	I1216 10:34:33.576718  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 10:34:33.602210  848599 logs.go:123] Gathering logs for describe nodes ...
	I1216 10:34:33.602235  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 10:34:33.698067  848599 logs.go:123] Gathering logs for etcd [93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6] ...
	I1216 10:34:33.698107  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6"
	I1216 10:34:33.748695  848599 logs.go:123] Gathering logs for coredns [d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190] ...
	I1216 10:34:33.748723  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190"
	I1216 10:34:33.802810  848599 logs.go:123] Gathering logs for kube-scheduler [c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53] ...
	I1216 10:34:33.802846  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53"
	I1216 10:34:33.841795  848599 logs.go:123] Gathering logs for kindnet [9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9] ...
	I1216 10:34:33.841823  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9"
	I1216 10:34:36.373886  848599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 10:34:36.387690  848599 api_server.go:72] duration metric: took 1m32.298817651s to wait for apiserver process to appear ...
	I1216 10:34:36.387721  848599 api_server.go:88] waiting for apiserver healthz status ...
	I1216 10:34:36.387772  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 10:34:36.387841  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 10:34:36.421031  848599 cri.go:89] found id: "c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804"
	I1216 10:34:36.421062  848599 cri.go:89] found id: ""
	I1216 10:34:36.421077  848599 logs.go:282] 1 containers: [c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804]
	I1216 10:34:36.421138  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:36.424373  848599 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 10:34:36.424428  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 10:34:36.456413  848599 cri.go:89] found id: "93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6"
	I1216 10:34:36.456434  848599 cri.go:89] found id: ""
	I1216 10:34:36.456445  848599 logs.go:282] 1 containers: [93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6]
	I1216 10:34:36.456495  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:36.459492  848599 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 10:34:36.459554  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 10:34:36.491350  848599 cri.go:89] found id: "d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190"
	I1216 10:34:36.491370  848599 cri.go:89] found id: ""
	I1216 10:34:36.491379  848599 logs.go:282] 1 containers: [d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190]
	I1216 10:34:36.491420  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:36.494403  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 10:34:36.494454  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 10:34:36.526671  848599 cri.go:89] found id: "c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53"
	I1216 10:34:36.526688  848599 cri.go:89] found id: ""
	I1216 10:34:36.526695  848599 logs.go:282] 1 containers: [c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53]
	I1216 10:34:36.526735  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:36.529636  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 10:34:36.529688  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 10:34:36.563198  848599 cri.go:89] found id: "c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc"
	I1216 10:34:36.563217  848599 cri.go:89] found id: ""
	I1216 10:34:36.563227  848599 logs.go:282] 1 containers: [c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc]
	I1216 10:34:36.563283  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:36.566202  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 10:34:36.566256  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 10:34:36.598334  848599 cri.go:89] found id: "bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50"
	I1216 10:34:36.598353  848599 cri.go:89] found id: ""
	I1216 10:34:36.598361  848599 logs.go:282] 1 containers: [bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50]
	I1216 10:34:36.598413  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:36.601335  848599 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 10:34:36.601404  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 10:34:36.634180  848599 cri.go:89] found id: "9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9"
	I1216 10:34:36.634195  848599 cri.go:89] found id: ""
	I1216 10:34:36.634203  848599 logs.go:282] 1 containers: [9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9]
	I1216 10:34:36.634250  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:36.637167  848599 logs.go:123] Gathering logs for kube-apiserver [c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804] ...
	I1216 10:34:36.637191  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804"
	I1216 10:34:36.680397  848599 logs.go:123] Gathering logs for kube-scheduler [c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53] ...
	I1216 10:34:36.680421  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53"
	I1216 10:34:36.717036  848599 logs.go:123] Gathering logs for kube-controller-manager [bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50] ...
	I1216 10:34:36.717062  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50"
	I1216 10:34:36.771623  848599 logs.go:123] Gathering logs for CRI-O ...
	I1216 10:34:36.771648  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 10:34:36.848400  848599 logs.go:123] Gathering logs for container status ...
	I1216 10:34:36.848426  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 10:34:36.890499  848599 logs.go:123] Gathering logs for describe nodes ...
	I1216 10:34:36.890524  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 10:34:36.984658  848599 logs.go:123] Gathering logs for dmesg ...
	I1216 10:34:36.984683  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 10:34:37.010767  848599 logs.go:123] Gathering logs for etcd [93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6] ...
	I1216 10:34:37.010795  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6"
	I1216 10:34:37.058997  848599 logs.go:123] Gathering logs for coredns [d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190] ...
	I1216 10:34:37.059021  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190"
	I1216 10:34:37.109478  848599 logs.go:123] Gathering logs for kube-proxy [c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc] ...
	I1216 10:34:37.109513  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc"
	I1216 10:34:37.141261  848599 logs.go:123] Gathering logs for kindnet [9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9] ...
	I1216 10:34:37.141284  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9"
	I1216 10:34:37.173779  848599 logs.go:123] Gathering logs for kubelet ...
	I1216 10:34:37.173857  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 10:34:39.755150  848599 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 10:34:39.758789  848599 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1216 10:34:39.759732  848599 api_server.go:141] control plane version: v1.31.2
	I1216 10:34:39.759759  848599 api_server.go:131] duration metric: took 3.372030509s to wait for apiserver health ...
	I1216 10:34:39.759767  848599 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 10:34:39.759796  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 10:34:39.759850  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 10:34:39.795031  848599 cri.go:89] found id: "c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804"
	I1216 10:34:39.795051  848599 cri.go:89] found id: ""
	I1216 10:34:39.795060  848599 logs.go:282] 1 containers: [c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804]
	I1216 10:34:39.795104  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:39.798358  848599 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 10:34:39.798435  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 10:34:39.831890  848599 cri.go:89] found id: "93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6"
	I1216 10:34:39.831906  848599 cri.go:89] found id: ""
	I1216 10:34:39.831913  848599 logs.go:282] 1 containers: [93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6]
	I1216 10:34:39.831951  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:39.834968  848599 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 10:34:39.835037  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 10:34:39.866579  848599 cri.go:89] found id: "d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190"
	I1216 10:34:39.866602  848599 cri.go:89] found id: ""
	I1216 10:34:39.866613  848599 logs.go:282] 1 containers: [d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190]
	I1216 10:34:39.866647  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:39.869695  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 10:34:39.869763  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 10:34:39.901933  848599 cri.go:89] found id: "c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53"
	I1216 10:34:39.901954  848599 cri.go:89] found id: ""
	I1216 10:34:39.901966  848599 logs.go:282] 1 containers: [c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53]
	I1216 10:34:39.902014  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:39.905112  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 10:34:39.905174  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 10:34:39.938572  848599 cri.go:89] found id: "c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc"
	I1216 10:34:39.938590  848599 cri.go:89] found id: ""
	I1216 10:34:39.938598  848599 logs.go:282] 1 containers: [c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc]
	I1216 10:34:39.938648  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:39.941675  848599 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 10:34:39.941738  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 10:34:39.974011  848599 cri.go:89] found id: "bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50"
	I1216 10:34:39.974033  848599 cri.go:89] found id: ""
	I1216 10:34:39.974043  848599 logs.go:282] 1 containers: [bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50]
	I1216 10:34:39.974092  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:39.977679  848599 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 10:34:39.977725  848599 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 10:34:40.011518  848599 cri.go:89] found id: "9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9"
	I1216 10:34:40.011539  848599 cri.go:89] found id: ""
	I1216 10:34:40.011547  848599 logs.go:282] 1 containers: [9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9]
	I1216 10:34:40.011598  848599 ssh_runner.go:195] Run: which crictl
	I1216 10:34:40.014781  848599 logs.go:123] Gathering logs for kubelet ...
	I1216 10:34:40.014805  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 10:34:40.093024  848599 logs.go:123] Gathering logs for kube-proxy [c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc] ...
	I1216 10:34:40.093048  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc"
	I1216 10:34:40.125022  848599 logs.go:123] Gathering logs for container status ...
	I1216 10:34:40.125045  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 10:34:40.167202  848599 logs.go:123] Gathering logs for etcd [93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6] ...
	I1216 10:34:40.167230  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6"
	I1216 10:34:40.217550  848599 logs.go:123] Gathering logs for coredns [d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190] ...
	I1216 10:34:40.217579  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190"
	I1216 10:34:40.271787  848599 logs.go:123] Gathering logs for kube-scheduler [c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53] ...
	I1216 10:34:40.271829  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53"
	I1216 10:34:40.308809  848599 logs.go:123] Gathering logs for kube-controller-manager [bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50] ...
	I1216 10:34:40.308835  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50"
	I1216 10:34:40.363908  848599 logs.go:123] Gathering logs for kindnet [9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9] ...
	I1216 10:34:40.363934  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9"
	I1216 10:34:40.396438  848599 logs.go:123] Gathering logs for dmesg ...
	I1216 10:34:40.396463  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 10:34:40.424000  848599 logs.go:123] Gathering logs for describe nodes ...
	I1216 10:34:40.424023  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 10:34:40.521844  848599 logs.go:123] Gathering logs for kube-apiserver [c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804] ...
	I1216 10:34:40.521880  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804"
	I1216 10:34:40.565349  848599 logs.go:123] Gathering logs for CRI-O ...
	I1216 10:34:40.565379  848599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 10:34:43.152582  848599 system_pods.go:59] 19 kube-system pods found
	I1216 10:34:43.152628  848599 system_pods.go:61] "amd-gpu-device-plugin-nhj8x" [483a0808-3e15-4de2-b48a-ecfa43394c55] Running
	I1216 10:34:43.152639  848599 system_pods.go:61] "coredns-7c65d6cfc9-ksv2k" [b31289fc-3ff8-4af0-a5d2-a88dace5589c] Running
	I1216 10:34:43.152645  848599 system_pods.go:61] "csi-hostpath-attacher-0" [9089b466-c717-4755-bf51-2740aecfaeb6] Running
	I1216 10:34:43.152650  848599 system_pods.go:61] "csi-hostpath-resizer-0" [963124d9-8e43-4fb9-a011-05c542d2fb50] Running
	I1216 10:34:43.152655  848599 system_pods.go:61] "csi-hostpathplugin-7826x" [856ef16b-5b68-404c-8df4-558dc73fe76b] Running
	I1216 10:34:43.152660  848599 system_pods.go:61] "etcd-addons-109663" [9789d971-2bea-46bf-872e-e096afce5cb0] Running
	I1216 10:34:43.152666  848599 system_pods.go:61] "kindnet-sn2ww" [1c8f1cfd-5f82-439c-b6f7-b654f855b517] Running
	I1216 10:34:43.152672  848599 system_pods.go:61] "kube-apiserver-addons-109663" [4e04829b-d42e-4de8-be6a-0ec8196b7c28] Running
	I1216 10:34:43.152678  848599 system_pods.go:61] "kube-controller-manager-addons-109663" [c5a39a90-0604-42e4-bdc4-d4b9ab6f6df5] Running
	I1216 10:34:43.152687  848599 system_pods.go:61] "kube-ingress-dns-minikube" [a0ba89f2-e8b1-498e-ab03-dd8a5e50c176] Running
	I1216 10:34:43.152694  848599 system_pods.go:61] "kube-proxy-dw2js" [82afbc0e-6ed6-4a7a-8721-d77176570525] Running
	I1216 10:34:43.152703  848599 system_pods.go:61] "kube-scheduler-addons-109663" [018079f5-5c1a-4a2c-8845-8adfc665ce77] Running
	I1216 10:34:43.152709  848599 system_pods.go:61] "metrics-server-84c5f94fbc-z8rzz" [0c4013ee-0e9e-4bf6-aff8-752bb76b1c0c] Running
	I1216 10:34:43.152719  848599 system_pods.go:61] "nvidia-device-plugin-daemonset-k4znm" [94be2280-9ef7-49a1-aed5-ae48c7b50056] Running
	I1216 10:34:43.152725  848599 system_pods.go:61] "registry-5cc95cd69-rkb22" [9148bfd2-bdfd-42f6-9b6e-f2cb29de4e1e] Running
	I1216 10:34:43.152731  848599 system_pods.go:61] "registry-proxy-w5gg9" [5d79e061-c009-4296-adaf-94ec1a94ed36] Running
	I1216 10:34:43.152737  848599 system_pods.go:61] "snapshot-controller-56fcc65765-8skj8" [29ea6b74-8543-4d6d-a9f0-8476aaef7f19] Running
	I1216 10:34:43.152744  848599 system_pods.go:61] "snapshot-controller-56fcc65765-rb9fx" [62bd9cad-e4a7-474c-9ce0-bb38412ded35] Running
	I1216 10:34:43.152752  848599 system_pods.go:61] "storage-provisioner" [f6eecac1-47ca-4d5e-8014-bbb9f35f7213] Running
	I1216 10:34:43.152764  848599 system_pods.go:74] duration metric: took 3.392988839s to wait for pod list to return data ...
	I1216 10:34:43.152779  848599 default_sa.go:34] waiting for default service account to be created ...
	I1216 10:34:43.154908  848599 default_sa.go:45] found service account: "default"
	I1216 10:34:43.154931  848599 default_sa.go:55] duration metric: took 2.143478ms for default service account to be created ...
	I1216 10:34:43.154942  848599 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 10:34:43.164127  848599 system_pods.go:86] 19 kube-system pods found
	I1216 10:34:43.164152  848599 system_pods.go:89] "amd-gpu-device-plugin-nhj8x" [483a0808-3e15-4de2-b48a-ecfa43394c55] Running
	I1216 10:34:43.164158  848599 system_pods.go:89] "coredns-7c65d6cfc9-ksv2k" [b31289fc-3ff8-4af0-a5d2-a88dace5589c] Running
	I1216 10:34:43.164162  848599 system_pods.go:89] "csi-hostpath-attacher-0" [9089b466-c717-4755-bf51-2740aecfaeb6] Running
	I1216 10:34:43.164166  848599 system_pods.go:89] "csi-hostpath-resizer-0" [963124d9-8e43-4fb9-a011-05c542d2fb50] Running
	I1216 10:34:43.164170  848599 system_pods.go:89] "csi-hostpathplugin-7826x" [856ef16b-5b68-404c-8df4-558dc73fe76b] Running
	I1216 10:34:43.164173  848599 system_pods.go:89] "etcd-addons-109663" [9789d971-2bea-46bf-872e-e096afce5cb0] Running
	I1216 10:34:43.164176  848599 system_pods.go:89] "kindnet-sn2ww" [1c8f1cfd-5f82-439c-b6f7-b654f855b517] Running
	I1216 10:34:43.164180  848599 system_pods.go:89] "kube-apiserver-addons-109663" [4e04829b-d42e-4de8-be6a-0ec8196b7c28] Running
	I1216 10:34:43.164184  848599 system_pods.go:89] "kube-controller-manager-addons-109663" [c5a39a90-0604-42e4-bdc4-d4b9ab6f6df5] Running
	I1216 10:34:43.164189  848599 system_pods.go:89] "kube-ingress-dns-minikube" [a0ba89f2-e8b1-498e-ab03-dd8a5e50c176] Running
	I1216 10:34:43.164195  848599 system_pods.go:89] "kube-proxy-dw2js" [82afbc0e-6ed6-4a7a-8721-d77176570525] Running
	I1216 10:34:43.164199  848599 system_pods.go:89] "kube-scheduler-addons-109663" [018079f5-5c1a-4a2c-8845-8adfc665ce77] Running
	I1216 10:34:43.164203  848599 system_pods.go:89] "metrics-server-84c5f94fbc-z8rzz" [0c4013ee-0e9e-4bf6-aff8-752bb76b1c0c] Running
	I1216 10:34:43.164208  848599 system_pods.go:89] "nvidia-device-plugin-daemonset-k4znm" [94be2280-9ef7-49a1-aed5-ae48c7b50056] Running
	I1216 10:34:43.164220  848599 system_pods.go:89] "registry-5cc95cd69-rkb22" [9148bfd2-bdfd-42f6-9b6e-f2cb29de4e1e] Running
	I1216 10:34:43.164223  848599 system_pods.go:89] "registry-proxy-w5gg9" [5d79e061-c009-4296-adaf-94ec1a94ed36] Running
	I1216 10:34:43.164228  848599 system_pods.go:89] "snapshot-controller-56fcc65765-8skj8" [29ea6b74-8543-4d6d-a9f0-8476aaef7f19] Running
	I1216 10:34:43.164234  848599 system_pods.go:89] "snapshot-controller-56fcc65765-rb9fx" [62bd9cad-e4a7-474c-9ce0-bb38412ded35] Running
	I1216 10:34:43.164237  848599 system_pods.go:89] "storage-provisioner" [f6eecac1-47ca-4d5e-8014-bbb9f35f7213] Running
	I1216 10:34:43.164244  848599 system_pods.go:126] duration metric: took 9.295549ms to wait for k8s-apps to be running ...
	I1216 10:34:43.164253  848599 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 10:34:43.164295  848599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 10:34:43.175918  848599 system_svc.go:56] duration metric: took 11.65853ms WaitForService to wait for kubelet
	I1216 10:34:43.175940  848599 kubeadm.go:582] duration metric: took 1m39.087076667s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 10:34:43.175962  848599 node_conditions.go:102] verifying NodePressure condition ...
	I1216 10:34:43.178532  848599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1216 10:34:43.178559  848599 node_conditions.go:123] node cpu capacity is 8
	I1216 10:34:43.178575  848599 node_conditions.go:105] duration metric: took 2.605732ms to run NodePressure ...
	I1216 10:34:43.178594  848599 start.go:241] waiting for startup goroutines ...
	I1216 10:34:43.178609  848599 start.go:246] waiting for cluster config update ...
	I1216 10:34:43.178631  848599 start.go:255] writing updated cluster config ...
	I1216 10:34:43.178953  848599 ssh_runner.go:195] Run: rm -f paused
	I1216 10:34:43.230691  848599 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1216 10:34:43.232683  848599 out.go:177] * Done! kubectl is now configured to use "addons-109663" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 16 10:37:49 addons-109663 crio[1041]: time="2024-12-16 10:37:49.869517555Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-5f85ff4588-5q5qg Namespace:ingress-nginx ID:96c18162d52c8179fb07beaebe88d6a303c1ed52953efed169b34b36cec7e3f1 UID:668823c9-e601-4b7a-ba34-1f5eeed69122 NetNS:/var/run/netns/32fd1335-6ba0-4ade-b970-d14cffc15f85 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 16 10:37:49 addons-109663 crio[1041]: time="2024-12-16 10:37:49.869635328Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-5f85ff4588-5q5qg from CNI network \"kindnet\" (type=ptp)"
	Dec 16 10:37:49 addons-109663 crio[1041]: time="2024-12-16 10:37:49.904808050Z" level=info msg="Stopped pod sandbox: 96c18162d52c8179fb07beaebe88d6a303c1ed52953efed169b34b36cec7e3f1" id=4c7da553-8bae-4726-9429-790e60d8e284 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 10:37:50 addons-109663 crio[1041]: time="2024-12-16 10:37:50.211779498Z" level=info msg="Removing container: 14ccae418ccad36f12cd3f867192bebc20edc174983d2b73b5d44e414f68a34b" id=6edc453b-037d-488b-b14a-d0c93d5f332c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 10:37:50 addons-109663 crio[1041]: time="2024-12-16 10:37:50.225828193Z" level=info msg="Removed container 14ccae418ccad36f12cd3f867192bebc20edc174983d2b73b5d44e414f68a34b: ingress-nginx/ingress-nginx-controller-5f85ff4588-5q5qg/controller" id=6edc453b-037d-488b-b14a-d0c93d5f332c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.276759076Z" level=info msg="Removing container: 35888134a837cba61505d8d53ae69764b0bf34c9379961282ca7331842714420" id=6e2c1245-646e-45c8-9a5f-e5c3e81d9079 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.289010572Z" level=info msg="Removed container 35888134a837cba61505d8d53ae69764b0bf34c9379961282ca7331842714420: ingress-nginx/ingress-nginx-admission-patch-s5fq7/patch" id=6e2c1245-646e-45c8-9a5f-e5c3e81d9079 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.289961479Z" level=info msg="Removing container: 5665ce1082fa5f554a3dc36324ba4262aa8905bf6dcb735f6b6ff5bc907dccb2" id=af96b335-ab33-4b28-a6b0-171f588c975c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.302627602Z" level=info msg="Removed container 5665ce1082fa5f554a3dc36324ba4262aa8905bf6dcb735f6b6ff5bc907dccb2: ingress-nginx/ingress-nginx-admission-create-287m6/create" id=af96b335-ab33-4b28-a6b0-171f588c975c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.304165168Z" level=info msg="Stopping pod sandbox: 96c18162d52c8179fb07beaebe88d6a303c1ed52953efed169b34b36cec7e3f1" id=3941b3b1-500f-4aaa-aa1c-bd08d8da25d2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.304220446Z" level=info msg="Stopped pod sandbox (already stopped): 96c18162d52c8179fb07beaebe88d6a303c1ed52953efed169b34b36cec7e3f1" id=3941b3b1-500f-4aaa-aa1c-bd08d8da25d2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.304470067Z" level=info msg="Removing pod sandbox: 96c18162d52c8179fb07beaebe88d6a303c1ed52953efed169b34b36cec7e3f1" id=74ac7b20-2337-4bde-a46f-159d202eb47d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.309826164Z" level=info msg="Removed pod sandbox: 96c18162d52c8179fb07beaebe88d6a303c1ed52953efed169b34b36cec7e3f1" id=74ac7b20-2337-4bde-a46f-159d202eb47d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.310109252Z" level=info msg="Stopping pod sandbox: 0b71d4cf0076cf42276e285f26a3e1a315b1f002fb1b210fce8ab58364011e0b" id=0d87f33f-6103-47d2-b7b4-5be86b1ac99f name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.310138589Z" level=info msg="Stopped pod sandbox (already stopped): 0b71d4cf0076cf42276e285f26a3e1a315b1f002fb1b210fce8ab58364011e0b" id=0d87f33f-6103-47d2-b7b4-5be86b1ac99f name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.310325296Z" level=info msg="Removing pod sandbox: 0b71d4cf0076cf42276e285f26a3e1a315b1f002fb1b210fce8ab58364011e0b" id=4f82dd8f-d580-41bc-94e9-be2355154e8d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.316228935Z" level=info msg="Removed pod sandbox: 0b71d4cf0076cf42276e285f26a3e1a315b1f002fb1b210fce8ab58364011e0b" id=4f82dd8f-d580-41bc-94e9-be2355154e8d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.316532065Z" level=info msg="Stopping pod sandbox: 7588d6c1b37267e2ba79247d8df96ec30245c771069273e1f09dad096a6a37fa" id=e967d245-1c6a-4e5a-9d92-8a2d8f3caa7b name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.316556074Z" level=info msg="Stopped pod sandbox (already stopped): 7588d6c1b37267e2ba79247d8df96ec30245c771069273e1f09dad096a6a37fa" id=e967d245-1c6a-4e5a-9d92-8a2d8f3caa7b name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.316758311Z" level=info msg="Removing pod sandbox: 7588d6c1b37267e2ba79247d8df96ec30245c771069273e1f09dad096a6a37fa" id=63102c8c-aca2-4308-8c5c-166b880c1607 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.321801544Z" level=info msg="Removed pod sandbox: 7588d6c1b37267e2ba79247d8df96ec30245c771069273e1f09dad096a6a37fa" id=63102c8c-aca2-4308-8c5c-166b880c1607 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.322087837Z" level=info msg="Stopping pod sandbox: 282892f631a518ae87de4b4029b5075792694fa6518282d4a70d2b298f17f025" id=b0c48f40-6b3b-4611-98bd-95f8e4f501f8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.322121385Z" level=info msg="Stopped pod sandbox (already stopped): 282892f631a518ae87de4b4029b5075792694fa6518282d4a70d2b298f17f025" id=b0c48f40-6b3b-4611-98bd-95f8e4f501f8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.322357045Z" level=info msg="Removing pod sandbox: 282892f631a518ae87de4b4029b5075792694fa6518282d4a70d2b298f17f025" id=10678570-d655-480f-9423-24356128d12a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 16 10:37:58 addons-109663 crio[1041]: time="2024-12-16 10:37:58.329192429Z" level=info msg="Removed pod sandbox: 282892f631a518ae87de4b4029b5075792694fa6518282d4a70d2b298f17f025" id=10678570-d655-480f-9423-24356128d12a name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b6ac96fbd2ff7       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   fa2caf81dd3dd       hello-world-app-55bf9c44b4-br7qj
	d384a65188bd6       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         5 minutes ago       Running             nginx                     0                   85a1116f8171d       nginx
	e0df8328f54c0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   cab241fcb05db       busybox
	d244f32e00679       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   99edf650f528e       local-path-provisioner-86d989889c-j9wdv
	2497007677f8c       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   7 minutes ago       Running             metrics-server            0                   f794499539882       metrics-server-84c5f94fbc-z8rzz
	d395437896ee2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        7 minutes ago       Running             coredns                   0                   cb969df31de80       coredns-7c65d6cfc9-ksv2k
	cbfe74880d2d7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   f3272849c7731       storage-provisioner
	9a6bfcbfaf469       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3                      7 minutes ago       Running             kindnet-cni               0                   5a7cc27da2525       kindnet-sn2ww
	c1be7640a86c8       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        7 minutes ago       Running             kube-proxy                0                   9f1718b08cd98       kube-proxy-dw2js
	93aca58b0473f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   7d663deb48a89       etcd-addons-109663
	c2d7f9e7ddfbc       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        8 minutes ago       Running             kube-apiserver            0                   52fae8dd5fd08       kube-apiserver-addons-109663
	c7d6c76bcfec7       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        8 minutes ago       Running             kube-scheduler            0                   3bd33f6501e1d       kube-scheduler-addons-109663
	bb5423f27c7f2       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        8 minutes ago       Running             kube-controller-manager   0                   3c1599239d305       kube-controller-manager-addons-109663
	
	
	==> coredns [d395437896ee29deac13e3b40538f4a61f3995dad89fb4205ccef971888d4190] <==
	[INFO] 10.244.0.22:34735 - 8745 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004719446s
	[INFO] 10.244.0.22:58803 - 15972 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004453347s
	[INFO] 10.244.0.22:34735 - 48257 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004449279s
	[INFO] 10.244.0.22:52727 - 50373 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004643483s
	[INFO] 10.244.0.22:37853 - 55884 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004798309s
	[INFO] 10.244.0.22:36020 - 59881 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004569124s
	[INFO] 10.244.0.22:36505 - 58610 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004669965s
	[INFO] 10.244.0.22:42176 - 1045 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004772496s
	[INFO] 10.244.0.22:37874 - 2998 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004946996s
	[INFO] 10.244.0.22:37874 - 45555 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00481269s
	[INFO] 10.244.0.22:36505 - 10050 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004936176s
	[INFO] 10.244.0.22:36020 - 58590 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004383175s
	[INFO] 10.244.0.22:42176 - 44852 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005065884s
	[INFO] 10.244.0.22:58803 - 20636 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005180262s
	[INFO] 10.244.0.22:34735 - 42068 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005202092s
	[INFO] 10.244.0.22:36505 - 63024 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000162661s
	[INFO] 10.244.0.22:52727 - 17931 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005334034s
	[INFO] 10.244.0.22:58803 - 9354 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000077686s
	[INFO] 10.244.0.22:37874 - 12596 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000255526s
	[INFO] 10.244.0.22:36020 - 31586 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000229967s
	[INFO] 10.244.0.22:42176 - 37427 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000210511s
	[INFO] 10.244.0.22:37853 - 37605 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005418945s
	[INFO] 10.244.0.22:34735 - 2026 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000249539s
	[INFO] 10.244.0.22:52727 - 24764 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000059571s
	[INFO] 10.244.0.22:37853 - 37660 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057613s
	
	
	==> describe nodes <==
	Name:               addons-109663
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-109663
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8
	                    minikube.k8s.io/name=addons-109663
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T10_32_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-109663
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 10:32:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-109663
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 10:40:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 10:38:03 +0000   Mon, 16 Dec 2024 10:32:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 10:38:03 +0000   Mon, 16 Dec 2024 10:32:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 10:38:03 +0000   Mon, 16 Dec 2024 10:32:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 10:38:03 +0000   Mon, 16 Dec 2024 10:33:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-109663
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 c878448df26f4703bfd4f4644cd4f6ef
	  System UUID:                1d94d62c-1455-428d-baf9-9d8a353f13c2
	  Boot ID:                    9fd10bb4-c61e-4d88-b4b5-bae725bc9632
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  default                     hello-world-app-55bf9c44b4-br7qj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 coredns-7c65d6cfc9-ksv2k                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m55s
	  kube-system                 etcd-addons-109663                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m
	  kube-system                 kindnet-sn2ww                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m55s
	  kube-system                 kube-apiserver-addons-109663               250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 kube-controller-manager-addons-109663      200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 kube-proxy-dw2js                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-scheduler-addons-109663               100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 metrics-server-84c5f94fbc-z8rzz            100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         7m49s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  local-path-storage          local-path-provisioner-86d989889c-j9wdv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 7m53s  kube-proxy       
	  Normal   Starting                 8m1s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m1s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m     kubelet          Node addons-109663 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m     kubelet          Node addons-109663 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m     kubelet          Node addons-109663 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m56s  node-controller  Node addons-109663 event: Registered Node addons-109663 in Controller
	  Normal   NodeReady                7m36s  kubelet          Node addons-109663 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de be ee 00 db 5d 08 06
	[  +0.004678] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 d9 73 09 a8 1d 08 06
	[  +8.602351] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 12 ec 78 2d 3c ff 08 06
	[  +0.000321] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 82 7d e3 e9 86 08 06
	[Dec16 09:19] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 d4 08 d7 58 df 08 06
	[  +0.000407] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 82 d9 73 09 a8 1d 08 06
	[Dec16 10:35] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: de d7 2f 49 bb 5f fa 9e 56 e2 a0 0e 08 00
	[  +1.023752] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de d7 2f 49 bb 5f fa 9e 56 e2 a0 0e 08 00
	[  +2.015839] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: de d7 2f 49 bb 5f fa 9e 56 e2 a0 0e 08 00
	[  +4.095632] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de d7 2f 49 bb 5f fa 9e 56 e2 a0 0e 08 00
	[  +8.195350] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de d7 2f 49 bb 5f fa 9e 56 e2 a0 0e 08 00
	[Dec16 10:36] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: de d7 2f 49 bb 5f fa 9e 56 e2 a0 0e 08 00
	[ +33.277339] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000012] ll header: 00000000: de d7 2f 49 bb 5f fa 9e 56 e2 a0 0e 08 00
	
	
	==> etcd [93aca58b0473f5baf584710e9ca182179cce77cab936414e2e85aba16f5ad4b6] <==
	{"level":"info","ts":"2024-12-16T10:33:07.672486Z","caller":"traceutil/trace.go:171","msg":"trace[793751114] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:377; }","duration":"295.672195ms","start":"2024-12-16T10:33:07.376804Z","end":"2024-12-16T10:33:07.672476Z","steps":["trace[793751114] 'agreement among raft nodes before linearized reading'  (duration: 216.741433ms)","trace[793751114] 'range keys from in-memory index tree'  (duration: 78.883366ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-16T10:33:07.672864Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.571774ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2024-12-16T10:33:07.672892Z","caller":"traceutil/trace.go:171","msg":"trace[1769695805] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:378; }","duration":"186.604327ms","start":"2024-12-16T10:33:07.486278Z","end":"2024-12-16T10:33:07.672883Z","steps":["trace[1769695805] 'agreement among raft nodes before linearized reading'  (duration: 186.519799ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:07.994499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.96805ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128033944884734075 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-d4bw4\" mod_revision:384 > success:<request_delete_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-d4bw4\" > > failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-d4bw4\" > >>","response":"size:18"}
	{"level":"info","ts":"2024-12-16T10:33:08.071909Z","caller":"traceutil/trace.go:171","msg":"trace[99038856] transaction","detail":"{read_only:false; number_of_response:1; response_revision:392; }","duration":"191.844388ms","start":"2024-12-16T10:33:07.880048Z","end":"2024-12-16T10:33:08.071892Z","steps":["trace[99038856] 'compare'  (duration: 112.892348ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T10:33:08.072065Z","caller":"traceutil/trace.go:171","msg":"trace[1239885436] linearizableReadLoop","detail":"{readStateIndex:407; appliedIndex:406; }","duration":"191.650397ms","start":"2024-12-16T10:33:07.880403Z","end":"2024-12-16T10:33:08.072053Z","steps":["trace[1239885436] 'read index received'  (duration: 833.119µs)","trace[1239885436] 'applied index is now lower than readState.Index'  (duration: 190.816182ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T10:33:08.072223Z","caller":"traceutil/trace.go:171","msg":"trace[738296694] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"191.759501ms","start":"2024-12-16T10:33:07.880456Z","end":"2024-12-16T10:33:08.072215Z","steps":["trace[738296694] 'process raft request'  (duration: 114.121491ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T10:33:08.072325Z","caller":"traceutil/trace.go:171","msg":"trace[817934492] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"191.792502ms","start":"2024-12-16T10:33:07.880522Z","end":"2024-12-16T10:33:08.072315Z","steps":["trace[817934492] 'process raft request'  (duration: 114.113915ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T10:33:08.072437Z","caller":"traceutil/trace.go:171","msg":"trace[960474592] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"191.735918ms","start":"2024-12-16T10:33:07.880693Z","end":"2024-12-16T10:33:08.072429Z","steps":["trace[960474592] 'process raft request'  (duration: 113.973168ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:08.072677Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.261081ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T10:33:08.072709Z","caller":"traceutil/trace.go:171","msg":"trace[1404446973] range","detail":"{range_begin:/registry/services/specs/kube-system/registry; range_end:; response_count:0; response_revision:401; }","duration":"192.300009ms","start":"2024-12-16T10:33:07.880400Z","end":"2024-12-16T10:33:08.072700Z","steps":["trace[1404446973] 'agreement among raft nodes before linearized reading'  (duration: 192.239715ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:08.072859Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.039823ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:260"}
	{"level":"info","ts":"2024-12-16T10:33:08.072886Z","caller":"traceutil/trace.go:171","msg":"trace[381039783] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:401; }","duration":"192.072803ms","start":"2024-12-16T10:33:07.880806Z","end":"2024-12-16T10:33:08.072879Z","steps":["trace[381039783] 'agreement among raft nodes before linearized reading'  (duration: 192.013363ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:08.381071Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.234713ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T10:33:08.381168Z","caller":"traceutil/trace.go:171","msg":"trace[1952366159] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:0; response_revision:425; }","duration":"100.32411ms","start":"2024-12-16T10:33:08.280818Z","end":"2024-12-16T10:33:08.381142Z","steps":["trace[1952366159] 'agreement among raft nodes before linearized reading'  (duration: 100.095009ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:08.485917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.755156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" ","response":"range_response_count:1 size:3350"}
	{"level":"info","ts":"2024-12-16T10:33:08.486052Z","caller":"traceutil/trace.go:171","msg":"trace[816096498] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:1; response_revision:426; }","duration":"199.895603ms","start":"2024-12-16T10:33:08.286139Z","end":"2024-12-16T10:33:08.486035Z","steps":["trace[816096498] 'agreement among raft nodes before linearized reading'  (duration: 95.906549ms)","trace[816096498] 'range keys from in-memory index tree'  (duration: 92.502138ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T10:33:08.486128Z","caller":"traceutil/trace.go:171","msg":"trace[641752804] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"103.967584ms","start":"2024-12-16T10:33:08.382149Z","end":"2024-12-16T10:33:08.486117Z","steps":["trace[641752804] 'process raft request'  (duration: 91.625966ms)","trace[641752804] 'compare'  (duration: 11.954162ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T10:34:05.693646Z","caller":"traceutil/trace.go:171","msg":"trace[1539685743] transaction","detail":"{read_only:false; response_revision:1111; number_of_response:1; }","duration":"105.470497ms","start":"2024-12-16T10:34:05.588152Z","end":"2024-12-16T10:34:05.693622Z","steps":["trace[1539685743] 'process raft request'  (duration: 88.430195ms)","trace[1539685743] 'compare'  (duration: 16.935916ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T10:34:28.616542Z","caller":"traceutil/trace.go:171","msg":"trace[1208903567] linearizableReadLoop","detail":"{readStateIndex:1235; appliedIndex:1234; }","duration":"116.984843ms","start":"2024-12-16T10:34:28.499541Z","end":"2024-12-16T10:34:28.616526Z","steps":["trace[1208903567] 'read index received'  (duration: 54.381885ms)","trace[1208903567] 'applied index is now lower than readState.Index'  (duration: 62.602481ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T10:34:28.616682Z","caller":"traceutil/trace.go:171","msg":"trace[608077655] transaction","detail":"{read_only:false; response_revision:1197; number_of_response:1; }","duration":"197.753384ms","start":"2024-12-16T10:34:28.418905Z","end":"2024-12-16T10:34:28.616659Z","steps":["trace[608077655] 'process raft request'  (duration: 135.083294ms)","trace[608077655] 'compare'  (duration: 62.448167ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-16T10:34:28.616727Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.057461ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"warn","ts":"2024-12-16T10:34:28.616733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.167393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-z8rzz\" ","response":"range_response_count:1 size:4862"}
	{"level":"info","ts":"2024-12-16T10:34:28.616768Z","caller":"traceutil/trace.go:171","msg":"trace[312161016] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1197; }","duration":"117.108854ms","start":"2024-12-16T10:34:28.499643Z","end":"2024-12-16T10:34:28.616752Z","steps":["trace[312161016] 'agreement among raft nodes before linearized reading'  (duration: 116.977084ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T10:34:28.616774Z","caller":"traceutil/trace.go:171","msg":"trace[1404395514] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-z8rzz; range_end:; response_count:1; response_revision:1197; }","duration":"117.232126ms","start":"2024-12-16T10:34:28.499532Z","end":"2024-12-16T10:34:28.616764Z","steps":["trace[1404395514] 'agreement among raft nodes before linearized reading'  (duration: 117.089497ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:40:58 up  3:23,  0 users,  load average: 0.08, 16.14, 69.14
	Linux addons-109663 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [9a6bfcbfaf469f95d1c0c8cbed2904943c6b3ed6c03103d1ddd3c1b525a828c9] <==
	I1216 10:38:51.681822       1 main.go:301] handling current node
	I1216 10:39:01.678272       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:39:01.678305       1 main.go:301] handling current node
	I1216 10:39:11.673598       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:39:11.673634       1 main.go:301] handling current node
	I1216 10:39:21.676120       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:39:21.676168       1 main.go:301] handling current node
	I1216 10:39:31.682218       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:39:31.682251       1 main.go:301] handling current node
	I1216 10:39:41.682336       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:39:41.682370       1 main.go:301] handling current node
	I1216 10:39:51.679555       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:39:51.679587       1 main.go:301] handling current node
	I1216 10:40:01.676353       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:40:01.676397       1 main.go:301] handling current node
	I1216 10:40:11.672942       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:40:11.672972       1 main.go:301] handling current node
	I1216 10:40:21.680381       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:40:21.680423       1 main.go:301] handling current node
	I1216 10:40:31.679588       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:40:31.679624       1 main.go:301] handling current node
	I1216 10:40:41.673365       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:40:41.673403       1 main.go:301] handling current node
	I1216 10:40:51.676130       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 10:40:51.676182       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c2d7f9e7ddfbc06209cfd28e0f274033b7c0d8d246840902f5d602801f9c1804] <==
	E1216 10:34:32.691074       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.203.234:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.203.234:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.203.234:443: connect: connection refused" logger="UnhandledError"
	E1216 10:34:32.692656       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.203.234:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.203.234:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.203.234:443: connect: connection refused" logger="UnhandledError"
	I1216 10:34:32.723275       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1216 10:34:51.896672       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38980: use of closed network connection
	E1216 10:34:52.055857       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:39006: use of closed network connection
	I1216 10:35:01.015105       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.13.210"}
	I1216 10:35:21.588891       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1216 10:35:21.753463       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.56.95"}
	I1216 10:35:23.395373       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1216 10:35:24.473039       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1216 10:35:48.692045       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1216 10:36:01.744832       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:36:01.744890       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 10:36:01.758462       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:36:01.758509       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 10:36:01.758956       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:36:01.759012       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 10:36:01.772864       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:36:01.773001       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 10:36:01.783253       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:36:01.783298       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1216 10:36:02.759800       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1216 10:36:02.784525       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1216 10:36:02.880411       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1216 10:37:42.590874       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.178.29"}
	
	
	==> kube-controller-manager [bb5423f27c7f2a039f4792caef379046c477b0bd38adbf004fa89dfc343f7b50] <==
	E1216 10:38:45.526516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:38:50.797339       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:38:50.797387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:39:10.843239       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:39:10.843282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:39:16.569407       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:39:16.569461       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:39:19.890967       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:39:19.891013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:39:34.593847       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:39:34.593887       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:39:51.683204       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:39:51.683262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:39:54.959265       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:39:54.959310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:40:00.453438       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:40:00.453489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:40:29.779648       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:40:29.779699       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:40:34.368598       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:40:34.368638       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:40:49.484129       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:40:49.484175       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:40:54.594413       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:40:54.594457       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [c1be7640a86c8f90c664638cd41fcf0e1115f9837b800644bb080433f43935cc] <==
	I1216 10:33:04.183324       1 server_linux.go:66] "Using iptables proxy"
	I1216 10:33:04.591563       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1216 10:33:04.678101       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 10:33:05.489706       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 10:33:05.489861       1 server_linux.go:169] "Using iptables Proxier"
	I1216 10:33:05.694015       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 10:33:05.694429       1 server.go:483] "Version info" version="v1.31.2"
	I1216 10:33:05.694453       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 10:33:05.788634       1 config.go:199] "Starting service config controller"
	I1216 10:33:06.171703       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 10:33:06.171734       1 shared_informer.go:320] Caches are synced for service config
	I1216 10:33:05.790467       1 config.go:105] "Starting endpoint slice config controller"
	I1216 10:33:06.171782       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 10:33:06.171788       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 10:33:05.790422       1 config.go:328] "Starting node config controller"
	I1216 10:33:06.171865       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 10:33:06.171872       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c7d6c76bcfec710b57c6b1fe3c19335fd182a394babb6ed68c250307fd00cd53] <==
	W1216 10:32:55.981327       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 10:32:55.981346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1216 10:32:55.981349       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E1216 10:32:55.981313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:55.981466       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1216 10:32:55.981508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1216 10:32:55.981539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:55.981556       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1216 10:32:55.981551       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1216 10:32:55.981651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:55.981595       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1216 10:32:55.981694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:55.981595       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1216 10:32:55.981721       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:55.981602       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 10:32:55.981745       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:56.856381       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1216 10:32:56.856434       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:56.883003       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 10:32:56.883044       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:56.921571       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 10:32:56.921608       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:56.963959       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 10:32:56.964001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1216 10:32:57.377547       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 10:39:08 addons-109663 kubelet[1650]: E1216 10:39:08.253890    1650 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345548253752992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:08 addons-109663 kubelet[1650]: E1216 10:39:08.253919    1650 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345548253752992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:18 addons-109663 kubelet[1650]: E1216 10:39:18.256691    1650 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345558256531275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:18 addons-109663 kubelet[1650]: E1216 10:39:18.256721    1650 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345558256531275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:28 addons-109663 kubelet[1650]: E1216 10:39:28.258408    1650 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345568258233539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:28 addons-109663 kubelet[1650]: E1216 10:39:28.258448    1650 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345568258233539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:38 addons-109663 kubelet[1650]: E1216 10:39:38.260654    1650 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345578260499554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:38 addons-109663 kubelet[1650]: E1216 10:39:38.260687    1650 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345578260499554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:43 addons-109663 kubelet[1650]: I1216 10:39:43.001307    1650 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 10:39:48 addons-109663 kubelet[1650]: E1216 10:39:48.262994    1650 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345588262817126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:48 addons-109663 kubelet[1650]: E1216 10:39:48.263039    1650 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345588262817126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:58 addons-109663 kubelet[1650]: E1216 10:39:58.265606    1650 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345598265436045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:58 addons-109663 kubelet[1650]: E1216 10:39:58.265639    1650 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345598265436045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:08 addons-109663 kubelet[1650]: E1216 10:40:08.267992    1650 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345608267720375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:08 addons-109663 kubelet[1650]: E1216 10:40:08.268033    1650 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345608267720375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:18 addons-109663 kubelet[1650]: E1216 10:40:18.270329    1650 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345618270162402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:18 addons-109663 kubelet[1650]: E1216 10:40:18.270357    1650 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345618270162402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:28 addons-109663 kubelet[1650]: E1216 10:40:28.272297    1650 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345628272142420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:28 addons-109663 kubelet[1650]: E1216 10:40:28.272334    1650 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345628272142420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:38 addons-109663 kubelet[1650]: E1216 10:40:38.274444    1650 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345638274252331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:38 addons-109663 kubelet[1650]: E1216 10:40:38.274483    1650 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345638274252331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:48 addons-109663 kubelet[1650]: E1216 10:40:48.276336    1650 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345648276126636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:48 addons-109663 kubelet[1650]: E1216 10:40:48.276377    1650 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345648276126636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:58 addons-109663 kubelet[1650]: E1216 10:40:58.279480    1650 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345658279309646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:58 addons-109663 kubelet[1650]: E1216 10:40:58.279517    1650 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345658279309646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [cbfe74880d2d74600d5e828c17a093b09e9242e83f220b8981aab484b98eba00] <==
	I1216 10:33:23.101386       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 10:33:23.109387       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 10:33:23.109441       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 10:33:23.119725       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 10:33:23.119895       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-109663_ffc62bba-699e-4bb1-b733-f38ab028cbbd!
	I1216 10:33:23.120209       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"172750c2-26af-46b6-a829-2003eae424b5", APIVersion:"v1", ResourceVersion:"892", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-109663_ffc62bba-699e-4bb1-b733-f38ab028cbbd became leader
	I1216 10:33:23.272006       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-109663_ffc62bba-699e-4bb1-b733-f38ab028cbbd!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-109663 -n addons-109663
helpers_test.go:261: (dbg) Run:  kubectl --context addons-109663 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (359.69s)
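
For context on the MetricsServer failure above, a minimal client-go sketch (not part of the test suite) of the kind of check the addon needs to satisfy: whether the aggregated metrics.k8s.io/v1beta1 API is actually being served. The kubeconfig path and the use of the discovery client as a readiness probe are illustrative assumptions only.

// metrics_check.go (hypothetical helper, not from the minikube repo)
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig location; point this at the addons-109663 profile as needed.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// While the aggregated API is still refusing connections (as in the
	// "connection refused" apiserver log lines above), this returns an error.
	res, err := dc.ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
	if err != nil {
		fmt.Println("metrics API not ready:", err)
		return
	}
	for _, r := range res.APIResources {
		fmt.Println("served resource:", r.Name)
	}
}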

                                                
                                    

Test pass (302/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 4.99
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 4.79
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.68
18 TestDownloadOnly/v1.31.2/DeleteAll 0.53
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.27
20 TestDownloadOnlyKic 1.2
21 TestBinaryMirror 0.75
22 TestOffline 51.27
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 143.11
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 8.44
35 TestAddons/parallel/Registry 16.26
37 TestAddons/parallel/InspektorGadget 12
40 TestAddons/parallel/CSI 46.22
41 TestAddons/parallel/Headlamp 16.56
42 TestAddons/parallel/CloudSpanner 6.48
43 TestAddons/parallel/LocalPath 8.06
44 TestAddons/parallel/NvidiaDevicePlugin 6.45
45 TestAddons/parallel/Yakd 10.71
46 TestAddons/parallel/AmdGpuDevicePlugin 5.45
47 TestAddons/StoppedEnableDisable 12.02
48 TestCertOptions 31.45
49 TestCertExpiration 233.44
51 TestForceSystemdFlag 26.83
52 TestForceSystemdEnv 36.96
54 TestKVMDriverInstallOrUpdate 1.73
58 TestErrorSpam/setup 20.52
59 TestErrorSpam/start 0.55
60 TestErrorSpam/status 0.84
61 TestErrorSpam/pause 1.44
62 TestErrorSpam/unpause 1.74
63 TestErrorSpam/stop 1.35
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 38.46
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 27.24
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.8
75 TestFunctional/serial/CacheCmd/cache/add_local 0.9
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 42.13
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.27
86 TestFunctional/serial/LogsFileCmd 1.27
87 TestFunctional/serial/InvalidService 3.74
89 TestFunctional/parallel/ConfigCmd 0.36
90 TestFunctional/parallel/DashboardCmd 9.29
91 TestFunctional/parallel/DryRun 0.37
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.94
97 TestFunctional/parallel/ServiceCmdConnect 9.6
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 30.5
101 TestFunctional/parallel/SSHCmd 0.7
102 TestFunctional/parallel/CpCmd 2.16
103 TestFunctional/parallel/MySQL 19.5
104 TestFunctional/parallel/FileSync 0.28
105 TestFunctional/parallel/CertSync 1.85
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
113 TestFunctional/parallel/License 0.16
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.64
116 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
119 TestFunctional/parallel/MountCmd/any-port 8.58
120 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
121 TestFunctional/parallel/MountCmd/specific-port 2.01
122 TestFunctional/parallel/ServiceCmd/List 0.32
123 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
124 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
126 TestFunctional/parallel/ServiceCmd/Format 0.37
127 TestFunctional/parallel/ProfileCmd/profile_list 0.48
128 TestFunctional/parallel/ServiceCmd/URL 0.43
129 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
131 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
132 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
133 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
134 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
135 TestFunctional/parallel/ImageCommands/ImageBuild 2.55
136 TestFunctional/parallel/ImageCommands/Setup 0.41
137 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.31
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.25
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.7
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.59
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.02
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.3
146 TestFunctional/parallel/ImageCommands/ImageRemove 2.39
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.23
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.58
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
155 TestFunctional/delete_echo-server_images 0.03
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.01
161 TestMultiControlPlane/serial/StartCluster 106.71
162 TestMultiControlPlane/serial/DeployApp 5.12
163 TestMultiControlPlane/serial/PingHostFromPods 1.03
164 TestMultiControlPlane/serial/AddWorkerNode 34.51
165 TestMultiControlPlane/serial/NodeLabels 0.06
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.81
167 TestMultiControlPlane/serial/CopyFile 15.26
168 TestMultiControlPlane/serial/StopSecondaryNode 12.46
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.65
170 TestMultiControlPlane/serial/RestartSecondaryNode 21.82
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 168.01
173 TestMultiControlPlane/serial/DeleteSecondaryNode 11.28
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
175 TestMultiControlPlane/serial/StopCluster 35.52
176 TestMultiControlPlane/serial/RestartCluster 100.06
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.62
178 TestMultiControlPlane/serial/AddSecondaryNode 38.69
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.82
183 TestJSONOutput/start/Command 42.37
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.65
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.56
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.69
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
208 TestKicCustomNetwork/create_custom_network 30.43
209 TestKicCustomNetwork/use_default_bridge_network 22.94
210 TestKicExistingNetwork 22.88
211 TestKicCustomSubnet 26.55
212 TestKicStaticIP 25.23
213 TestMainNoArgs 0.05
214 TestMinikubeProfile 45.28
217 TestMountStart/serial/StartWithMountFirst 5.33
218 TestMountStart/serial/VerifyMountFirst 0.24
219 TestMountStart/serial/StartWithMountSecond 5.46
220 TestMountStart/serial/VerifyMountSecond 0.23
221 TestMountStart/serial/DeleteFirst 1.58
222 TestMountStart/serial/VerifyMountPostDelete 0.23
223 TestMountStart/serial/Stop 1.17
224 TestMountStart/serial/RestartStopped 7.08
225 TestMountStart/serial/VerifyMountPostStop 0.24
228 TestMultiNode/serial/FreshStart2Nodes 70.76
229 TestMultiNode/serial/DeployApp2Nodes 4.43
230 TestMultiNode/serial/PingHostFrom2Pods 0.71
231 TestMultiNode/serial/AddNode 28.96
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.59
234 TestMultiNode/serial/CopyFile 8.7
235 TestMultiNode/serial/StopNode 2.05
236 TestMultiNode/serial/StartAfterStop 8.86
237 TestMultiNode/serial/RestartKeepsNodes 98.41
238 TestMultiNode/serial/DeleteNode 5.18
239 TestMultiNode/serial/StopMultiNode 23.7
240 TestMultiNode/serial/RestartMultiNode 53.7
241 TestMultiNode/serial/ValidateNameConflict 21.86
246 TestPreload 103.22
248 TestScheduledStopUnix 96.55
251 TestInsufficientStorage 12.53
252 TestRunningBinaryUpgrade 99.72
254 TestKubernetesUpgrade 350.26
255 TestMissingContainerUpgrade 126.03
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
261 TestNoKubernetes/serial/StartWithK8s 35.28
266 TestNetworkPlugins/group/false 7.34
270 TestNoKubernetes/serial/StartWithStopK8s 8.64
271 TestNoKubernetes/serial/Start 7.05
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
273 TestStoppedBinaryUpgrade/Setup 0.51
274 TestNoKubernetes/serial/ProfileList 1.41
275 TestStoppedBinaryUpgrade/Upgrade 102.83
276 TestNoKubernetes/serial/Stop 1.59
277 TestNoKubernetes/serial/StartNoArgs 7.25
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
279 TestStoppedBinaryUpgrade/MinikubeLogs 2.77
288 TestPause/serial/Start 40.01
289 TestPause/serial/SecondStartNoReconfiguration 27.27
290 TestPause/serial/Pause 0.77
291 TestPause/serial/VerifyStatus 0.31
292 TestPause/serial/Unpause 0.67
293 TestPause/serial/PauseAgain 0.84
294 TestPause/serial/DeletePaused 2.74
295 TestNetworkPlugins/group/auto/Start 44.62
296 TestPause/serial/VerifyDeletedResources 0.79
297 TestNetworkPlugins/group/enable-default-cni/Start 67.44
298 TestNetworkPlugins/group/flannel/Start 46.67
299 TestNetworkPlugins/group/auto/KubeletFlags 0.26
300 TestNetworkPlugins/group/auto/NetCatPod 9.17
301 TestNetworkPlugins/group/auto/DNS 0.12
302 TestNetworkPlugins/group/auto/Localhost 0.1
303 TestNetworkPlugins/group/auto/HairPin 0.11
304 TestNetworkPlugins/group/flannel/ControllerPod 6.01
305 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
306 TestNetworkPlugins/group/flannel/NetCatPod 9.2
307 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
308 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.2
309 TestNetworkPlugins/group/flannel/DNS 0.17
310 TestNetworkPlugins/group/flannel/Localhost 0.13
311 TestNetworkPlugins/group/flannel/HairPin 0.14
312 TestNetworkPlugins/group/calico/Start 52.41
313 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
314 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
315 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
316 TestNetworkPlugins/group/kindnet/Start 43.5
317 TestNetworkPlugins/group/bridge/Start 69.18
318 TestNetworkPlugins/group/calico/ControllerPod 6.01
319 TestNetworkPlugins/group/calico/KubeletFlags 0.29
320 TestNetworkPlugins/group/calico/NetCatPod 10.17
321 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
322 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
323 TestNetworkPlugins/group/kindnet/NetCatPod 9.17
324 TestNetworkPlugins/group/calico/DNS 0.14
325 TestNetworkPlugins/group/calico/Localhost 0.1
326 TestNetworkPlugins/group/calico/HairPin 0.1
327 TestNetworkPlugins/group/kindnet/DNS 0.13
328 TestNetworkPlugins/group/kindnet/Localhost 0.11
329 TestNetworkPlugins/group/kindnet/HairPin 0.11
330 TestNetworkPlugins/group/custom-flannel/Start 51.24
332 TestStartStop/group/old-k8s-version/serial/FirstStart 139.82
333 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
334 TestNetworkPlugins/group/bridge/NetCatPod 11.66
336 TestStartStop/group/no-preload/serial/FirstStart 57.32
337 TestNetworkPlugins/group/bridge/DNS 0.19
338 TestNetworkPlugins/group/bridge/Localhost 0.18
339 TestNetworkPlugins/group/bridge/HairPin 0.2
341 TestStartStop/group/embed-certs/serial/FirstStart 45.59
342 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
343 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.18
344 TestNetworkPlugins/group/custom-flannel/DNS 0.17
345 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
346 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
347 TestStartStop/group/no-preload/serial/DeployApp 10.24
348 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.94
349 TestStartStop/group/no-preload/serial/Stop 11.92
351 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.42
352 TestStartStop/group/embed-certs/serial/DeployApp 8.22
353 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
354 TestStartStop/group/no-preload/serial/SecondStart 262.73
355 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.95
356 TestStartStop/group/embed-certs/serial/Stop 12.27
357 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
358 TestStartStop/group/embed-certs/serial/SecondStart 272.6
359 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.25
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.87
361 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.87
362 TestStartStop/group/old-k8s-version/serial/DeployApp 9.36
363 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
364 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.02
365 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.72
366 TestStartStop/group/old-k8s-version/serial/Stop 11.91
367 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
368 TestStartStop/group/old-k8s-version/serial/SecondStart 138.45
369 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
370 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
371 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
372 TestStartStop/group/old-k8s-version/serial/Pause 2.49
374 TestStartStop/group/newest-cni/serial/FirstStart 28.66
375 TestStartStop/group/newest-cni/serial/DeployApp 0
376 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.15
377 TestStartStop/group/newest-cni/serial/Stop 1.19
378 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
379 TestStartStop/group/newest-cni/serial/SecondStart 12.63
380 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
381 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
382 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
384 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
385 TestStartStop/group/newest-cni/serial/Pause 2.59
386 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
387 TestStartStop/group/no-preload/serial/Pause 2.67
388 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
389 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
390 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
391 TestStartStop/group/embed-certs/serial/Pause 2.47
392 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
393 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
394 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
395 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.42
x
+
TestDownloadOnly/v1.20.0/json-events (4.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-708581 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-708581 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.994119025s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (4.99s)
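
The json-events variant runs minikube with -o=json, which emits one JSON object per line (CloudEvents-style) on stdout. A minimal sketch, not taken from the test code, of consuming that stream; the field names printed and the file name parse_events.go are assumptions for illustration.

// parse_events.go (hypothetical consumer of the -o=json event stream)
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Assumes the events are piped in, e.g.:
	//   out/minikube-linux-amd64 start -o=json ... | go run parse_events.go
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines interleaved in the output
		}
		fmt.Printf("type=%v data=%v\n", ev["type"], ev["data"])
	}
}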

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1216 10:32:10.896295  847292 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1216 10:32:10.896403  847292 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-840384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
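
The preload-exists step above only verifies that the cached preload tarball is present on disk. A trivial standalone sketch of the same check, reusing the tarball name from the log line; the MINIKUBE_HOME handling is simplified and assumed.

// preload_check.go (hypothetical, mirrors the cache layout logged above)
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home := os.Getenv("MINIKUBE_HOME")
	if home == "" {
		home = filepath.Join(os.Getenv("HOME"), ".minikube")
	}
	tarball := filepath.Join(home, "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4")
	if info, err := os.Stat(tarball); err == nil {
		fmt.Printf("preload found: %s (%d bytes)\n", tarball, info.Size())
	} else {
		fmt.Println("preload missing:", err)
	}
}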

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-708581
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-708581: exit status 85 (62.516022ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-708581 | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC |          |
	|         | -p download-only-708581        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 10:32:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 10:32:05.945111  847306 out.go:345] Setting OutFile to fd 1 ...
	I1216 10:32:05.945259  847306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:32:05.945273  847306 out.go:358] Setting ErrFile to fd 2...
	I1216 10:32:05.945280  847306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:32:05.945456  847306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-840384/.minikube/bin
	W1216 10:32:05.945598  847306 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20107-840384/.minikube/config/config.json: open /home/jenkins/minikube-integration/20107-840384/.minikube/config/config.json: no such file or directory
	I1216 10:32:05.946168  847306 out.go:352] Setting JSON to true
	I1216 10:32:05.947123  847306 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11673,"bootTime":1734333453,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 10:32:05.947232  847306 start.go:139] virtualization: kvm guest
	I1216 10:32:05.949541  847306 out.go:97] [download-only-708581] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1216 10:32:05.949669  847306 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20107-840384/.minikube/cache/preloaded-tarball: no such file or directory
	I1216 10:32:05.949716  847306 notify.go:220] Checking for updates...
	I1216 10:32:05.951069  847306 out.go:169] MINIKUBE_LOCATION=20107
	I1216 10:32:05.952426  847306 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 10:32:05.953694  847306 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20107-840384/kubeconfig
	I1216 10:32:05.954707  847306 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-840384/.minikube
	I1216 10:32:05.955677  847306 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1216 10:32:05.957631  847306 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 10:32:05.957830  847306 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 10:32:05.979811  847306 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 10:32:05.979925  847306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 10:32:06.036355  847306 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-16 10:32:06.027511427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 10:32:06.036466  847306 docker.go:318] overlay module found
	I1216 10:32:06.037844  847306 out.go:97] Using the docker driver based on user configuration
	I1216 10:32:06.037868  847306 start.go:297] selected driver: docker
	I1216 10:32:06.037880  847306 start.go:901] validating driver "docker" against <nil>
	I1216 10:32:06.037966  847306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 10:32:06.082434  847306 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-16 10:32:06.074060976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 10:32:06.082623  847306 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 10:32:06.083195  847306 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1216 10:32:06.083383  847306 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 10:32:06.085065  847306 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-708581 host does not exist
	  To start a cluster, run: "minikube start -p download-only-708581"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
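
Note that exit status 85 is the expected outcome here: a download-only profile never creates a control-plane host, so "minikube logs" has nothing to collect. A minimal shell sketch of the same check, assuming the locally built binary and the profile name from this run:

    # Expect a non-zero exit (85 in this run): the download-only profile has no host.
    out/minikube-linux-amd64 logs -p download-only-708581
    echo "logs exit status: $?"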

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-708581
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/json-events (4.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-505735 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-505735 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.792811418s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (4.79s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1216 10:32:16.086744  847292 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1216 10:32:16.086793  847292 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-840384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)
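
The preload-exists check only confirms that the CRI-O preload tarball landed in the local cache. A manual equivalent, assuming the same minikube home layout as this run (path copied from the log above):

    ls -lh /home/jenkins/minikube-integration/20107-840384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4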

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/LogsDuration (0.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-505735
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-505735: exit status 85 (674.50837ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-708581 | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC |                     |
	|         | -p download-only-708581        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC | 16 Dec 24 10:32 UTC |
	| delete  | -p download-only-708581        | download-only-708581 | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC | 16 Dec 24 10:32 UTC |
	| start   | -o=json --download-only        | download-only-505735 | jenkins | v1.34.0 | 16 Dec 24 10:32 UTC |                     |
	|         | -p download-only-505735        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 10:32:11
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 10:32:11.337379  847654 out.go:345] Setting OutFile to fd 1 ...
	I1216 10:32:11.337625  847654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:32:11.337634  847654 out.go:358] Setting ErrFile to fd 2...
	I1216 10:32:11.337639  847654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:32:11.337788  847654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-840384/.minikube/bin
	I1216 10:32:11.338288  847654 out.go:352] Setting JSON to true
	I1216 10:32:11.339120  847654 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":11678,"bootTime":1734333453,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 10:32:11.339225  847654 start.go:139] virtualization: kvm guest
	I1216 10:32:11.341003  847654 out.go:97] [download-only-505735] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 10:32:11.341179  847654 notify.go:220] Checking for updates...
	I1216 10:32:11.342496  847654 out.go:169] MINIKUBE_LOCATION=20107
	I1216 10:32:11.343743  847654 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 10:32:11.345150  847654 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20107-840384/kubeconfig
	I1216 10:32:11.346442  847654 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-840384/.minikube
	I1216 10:32:11.347736  847654 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1216 10:32:11.350092  847654 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 10:32:11.350280  847654 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 10:32:11.371438  847654 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 10:32:11.371532  847654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 10:32:11.420003  847654 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-16 10:32:11.411356816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 10:32:11.420101  847654 docker.go:318] overlay module found
	I1216 10:32:11.421647  847654 out.go:97] Using the docker driver based on user configuration
	I1216 10:32:11.421668  847654 start.go:297] selected driver: docker
	I1216 10:32:11.421674  847654 start.go:901] validating driver "docker" against <nil>
	I1216 10:32:11.421772  847654 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 10:32:11.473584  847654 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-16 10:32:11.4648727 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 10:32:11.473771  847654 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 10:32:11.474274  847654 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1216 10:32:11.474451  847654 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 10:32:11.476107  847654 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-505735 host does not exist
	  To start a cluster, run: "minikube start -p download-only-505735"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.68s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAll (0.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.53s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-505735
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.27s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.2s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-072674 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-072674" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-072674
--- PASS: TestDownloadOnlyKic (1.20s)

                                                
                                    
x
+
TestBinaryMirror (0.75s)

                                                
                                                
=== RUN   TestBinaryMirror
I1216 10:32:19.324948  847292 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-516574 --alsologtostderr --binary-mirror http://127.0.0.1:32893 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-516574" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-516574
--- PASS: TestBinaryMirror (0.75s)
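
TestBinaryMirror points minikube at a local HTTP mirror for the Kubernetes binaries instead of dl.k8s.io. A sketch of the same invocation, assuming a mirror serving the dl.k8s.io layout is already listening on the address used in this run:

    # --binary-mirror redirects the kubectl/kubelet/kubeadm downloads to the given base URL.
    out/minikube-linux-amd64 start --download-only -p binary-mirror-516574 \
      --binary-mirror http://127.0.0.1:32893 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 delete -p binary-mirror-516574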

                                                
                                    
x
+
TestOffline (51.27s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-642138 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-642138 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (48.928779739s)
helpers_test.go:175: Cleaning up "offline-crio-642138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-642138
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-642138: (2.341084657s)
--- PASS: TestOffline (51.27s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-109663
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-109663: exit status 85 (54.006299ms)

                                                
                                                
-- stdout --
	* Profile "addons-109663" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-109663"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-109663
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-109663: exit status 85 (54.640715ms)

                                                
                                                
-- stdout --
	* Profile "addons-109663" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-109663"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (143.11s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-109663 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-109663 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m23.111915444s)
--- PASS: TestAddons/Setup (143.11s)
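
Setup enables every addon under test in a single start invocation. Individual addons can also be toggled on the running profile afterwards, which is the mechanism the later enable/disable steps rely on; a short sketch using the dashboard addon as in the PreSetup checks ("addons list" added here for inspection):

    # Enable, inspect, and disable a single addon on the already-running profile.
    out/minikube-linux-amd64 -p addons-109663 addons enable dashboard
    out/minikube-linux-amd64 -p addons-109663 addons list
    out/minikube-linux-amd64 -p addons-109663 addons disable dashboard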

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-109663 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-109663 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.44s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-109663 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-109663 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a279b4d5-84e7-4fa3-ad9d-f47db0dc3a25] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a279b4d5-84e7-4fa3-ad9d-f47db0dc3a25] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003888014s
addons_test.go:633: (dbg) Run:  kubectl --context addons-109663 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-109663 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-109663 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.44s)
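
The assertion reduces to exec-ing into the busybox pod and confirming that the gcp-auth webhook injected the fake credential environment; the same two probes from the run above:

    # Both variables are injected into new pods by the gcp-auth admission webhook.
    kubectl --context addons-109663 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-109663 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"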

                                                
                                    
x
+
TestAddons/parallel/Registry (16.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.562236ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-rkb22" [9148bfd2-bdfd-42f6-9b6e-f2cb29de4e1e] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002654226s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-w5gg9" [5d79e061-c009-4296-adaf-94ec1a94ed36] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004524184s
addons_test.go:331: (dbg) Run:  kubectl --context addons-109663 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-109663 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-109663 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.457070931s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 ip
2024/12/16 10:35:16 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.26s)
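
The registry check probes the addon from inside the cluster with a throwaway busybox pod and then from the host via the node address reported by "minikube ip" (192.168.49.2:5000 in this run). A sketch of both probes, using the commands from the run above plus a host-side curl:

    # In-cluster: the registry service must resolve and answer on its ClusterIP.
    kubectl --context addons-109663 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Host-side: registry-proxy publishes port 5000 on the node IP.
    curl -sI "http://$(out/minikube-linux-amd64 -p addons-109663 ip):5000/"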

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-n6b29" [82eec7e0-7295-4218-a6c8-69120d83ffa9] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004228739s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-109663 addons disable inspektor-gadget --alsologtostderr -v=1: (5.991520168s)
--- PASS: TestAddons/parallel/InspektorGadget (12.00s)

                                                
                                    
x
+
TestAddons/parallel/CSI (46.22s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1216 10:35:22.353170  847292 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1216 10:35:22.357372  847292 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1216 10:35:22.357396  847292 kapi.go:107] duration metric: took 4.243848ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.252289ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-109663 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-109663 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [18577ba4-9202-440d-9fe5-b85dfdbfbdd8] Pending
helpers_test.go:344: "task-pv-pod" [18577ba4-9202-440d-9fe5-b85dfdbfbdd8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [18577ba4-9202-440d-9fe5-b85dfdbfbdd8] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003732481s
addons_test.go:511: (dbg) Run:  kubectl --context addons-109663 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-109663 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-109663 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-109663 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-109663 delete pod task-pv-pod: (1.218304156s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-109663 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-109663 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-109663 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [07292c74-48a5-4558-9412-61806490f959] Pending
helpers_test.go:344: "task-pv-pod-restore" [07292c74-48a5-4558-9412-61806490f959] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.002872307s
addons_test.go:553: (dbg) Run:  kubectl --context addons-109663 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-109663 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-109663 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-109663 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.550864588s)
--- PASS: TestAddons/parallel/CSI (46.22s)
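
The CSI exercise is a fixed sequence: create a PVC, mount it in a pod, snapshot it, delete the originals, then restore both from the snapshot, polling PVC phase and snapshot readiness along the way. The command order, taken from the run above (manifest contents live under minikube's testdata and are not reproduced here):

    kubectl --context addons-109663 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-109663 get pvc hpvc -o jsonpath={.status.phase}    # polled repeatedly until the claim settles
    kubectl --context addons-109663 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-109663 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-109663 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
    kubectl --context addons-109663 delete pod task-pv-pod
    kubectl --context addons-109663 delete pvc hpvc
    kubectl --context addons-109663 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-109663 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml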

                                                
                                    
x
+
TestAddons/parallel/Headlamp (16.56s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-109663 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-v474m" [97db9182-98fe-45c3-89ad-f01f34aca221] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-v474m" [97db9182-98fe-45c3-89ad-f01f34aca221] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.002508063s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-109663 addons disable headlamp --alsologtostderr -v=1: (5.793856233s)
--- PASS: TestAddons/parallel/Headlamp (16.56s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.48s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-jgvfk" [9ff6ab57-5b1c-4e90-96d6-bf2c3bfd86c4] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003041935s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.48s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.06s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-109663 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-109663 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-109663 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [43a7333d-8c54-407a-ae73-81448dff9231] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [43a7333d-8c54-407a-ae73-81448dff9231] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [43a7333d-8c54-407a-ae73-81448dff9231] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003916291s
addons_test.go:906: (dbg) Run:  kubectl --context addons-109663 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 ssh "cat /opt/local-path-provisioner/pvc-9e504c9a-bb3a-4229-9525-d31715212760_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-109663 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-109663 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.06s)
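
Local-path provisioning is verified by reading back, over "minikube ssh", the file the test pod wrote into the provisioner's hostPath directory. The pvc-<uid> segment of the path changes every run; the one below is copied from this log:

    out/minikube-linux-amd64 -p addons-109663 ssh \
      "cat /opt/local-path-provisioner/pvc-9e504c9a-bb3a-4229-9525-d31715212760_default_test-pvc/file1"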

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.45s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-k4znm" [94be2280-9ef7-49a1-aed5-ae48c7b50056] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003412459s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.45s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.71s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-bs66v" [b1e0514d-4db5-455b-8b21-044a53df0685] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003426774s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-109663 addons disable yakd --alsologtostderr -v=1: (5.702585047s)
--- PASS: TestAddons/parallel/Yakd (10.71s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.45s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-nhj8x" [483a0808-3e15-4de2-b48a-ecfa43394c55] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003128632s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.45s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.02s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-109663
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-109663: (11.778773576s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-109663
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-109663
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-109663
--- PASS: TestAddons/StoppedEnableDisable (12.02s)

                                                
                                    
x
+
TestCertOptions (31.45s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-805971 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-805971 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.592455507s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-805971 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-805971 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-805971 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-805971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-805971
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-805971: (5.163172188s)
--- PASS: TestCertOptions (31.45s)
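
The assertions inspect the generated apiserver certificate for the extra SANs and the non-default port. A manual check along the same lines (the grep filter is an addition for readability):

    out/minikube-linux-amd64 -p cert-options-805971 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    kubectl --context cert-options-805971 config view    # server URL should use port 8555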

                                                
                                    
x
+
TestCertExpiration (233.44s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-237749 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-237749 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.481653988s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-237749 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-237749 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (25.7825645s)
helpers_test.go:175: Cleaning up "cert-expiration-237749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-237749
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-237749: (3.17304434s)
--- PASS: TestCertExpiration (233.44s)
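
Both starts target the same profile: the first provisions certificates with a three-minute lifetime, and once they lapse (the gap between the two starts accounts for most of the 233s), the second start with a one-year lifetime forces regeneration. The pair of commands from the run:

    out/minikube-linux-amd64 start -p cert-expiration-237749 --memory=2048 \
      --cert-expiration=3m --driver=docker --container-runtime=crio
    # ...wait for the short-lived certificates to expire, then restart with a longer lifetime...
    out/minikube-linux-amd64 start -p cert-expiration-237749 --memory=2048 \
      --cert-expiration=8760h --driver=docker --container-runtime=crio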

                                                
                                    
x
+
TestForceSystemdFlag (26.83s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-239136 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-239136 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.2906117s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-239136 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-239136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-239136
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-239136: (3.259382305s)
--- PASS: TestForceSystemdFlag (26.83s)
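
--force-systemd is validated by reading the CRI-O drop-in that minikube writes on the node; the expected effect is the systemd cgroup manager. A sketch of the check (filtering on cgroup_manager, CRI-O's setting for this, is an addition here):

    out/minikube-linux-amd64 -p force-systemd-flag-239136 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager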

                                                
                                    
x
+
TestForceSystemdEnv (36.96s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-670894 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-670894 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.316635173s)
helpers_test.go:175: Cleaning up "force-systemd-env-670894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-670894
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-670894: (2.642600225s)
--- PASS: TestForceSystemdEnv (36.96s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.73s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1216 11:09:47.110007  847292 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1216 11:09:47.110152  847292 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1216 11:09:47.139392  847292 install.go:62] docker-machine-driver-kvm2: exit status 1
W1216 11:09:47.139775  847292 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1216 11:09:47.139859  847292 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate672677386/001/docker-machine-driver-kvm2
I1216 11:09:47.519804  847292 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate672677386/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x532a240 0x532a240 0x532a240 0x532a240 0x532a240 0x532a240 0x532a240] Decompressors:map[bz2:0xc0009f0850 gz:0xc0009f0858 tar:0xc0009f0800 tar.bz2:0xc0009f0810 tar.gz:0xc0009f0820 tar.xz:0xc0009f0830 tar.zst:0xc0009f0840 tbz2:0xc0009f0810 tgz:0xc0009f0820 txz:0xc0009f0830 tzst:0xc0009f0840 xz:0xc0009f0860 zip:0xc0009f0870 zst:0xc0009f0868] Getters:map[file:0xc001a31040 http:0xc000158730 https:0xc000158a00] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1216 11:09:47.519848  847292 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate672677386/001/docker-machine-driver-kvm2
I1216 11:09:48.381573  847292 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1216 11:09:48.381663  847292 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1216 11:09:48.410943  847292 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1216 11:09:48.410973  847292 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1216 11:09:48.411045  847292 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1216 11:09:48.411073  847292 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate672677386/002/docker-machine-driver-kvm2
I1216 11:09:48.431990  847292 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate672677386/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x532a240 0x532a240 0x532a240 0x532a240 0x532a240 0x532a240 0x532a240] Decompressors:map[bz2:0xc0009f0850 gz:0xc0009f0858 tar:0xc0009f0800 tar.bz2:0xc0009f0810 tar.gz:0xc0009f0820 tar.xz:0xc0009f0830 tar.zst:0xc0009f0840 tbz2:0xc0009f0810 tgz:0xc0009f0820 txz:0xc0009f0830 tzst:0xc0009f0840 xz:0xc0009f0860 zip:0xc0009f0870 zst:0xc0009f0868] Getters:map[file:0xc0001bc310 http:0xc000619180 https:0xc0006191d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1216 11:09:48.432024  847292 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate672677386/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.73s)
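The log shows the driver installer first requesting the arch-suffixed release asset, getting a 404 on its checksum file, and then falling back to the unsuffixed name. A rough shell analogue of that try-arch-then-common order, using the release URLs from the log (minikube itself does this through a Go download library with checksum verification; the curl commands below are only an illustration of the fallback, not the real implementation):

VER=v1.3.0
BASE="https://github.com/kubernetes/minikube/releases/download/$VER"
# Try the arch-suffixed asset first; on failure, fall back to the common name
curl -fLO "$BASE/docker-machine-driver-kvm2-amd64" || curl -fLO "$BASE/docker-machine-driver-kvm2"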

                                                
                                    
x
+
TestErrorSpam/setup (20.52s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-837507 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-837507 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-837507 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-837507 --driver=docker  --container-runtime=crio: (20.517209802s)
--- PASS: TestErrorSpam/setup (20.52s)

                                                
                                    
x
+
TestErrorSpam/start (0.55s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837507 --log_dir /tmp/nospam-837507 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837507 --log_dir /tmp/nospam-837507 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837507 --log_dir /tmp/nospam-837507 start --dry-run
--- PASS: TestErrorSpam/start (0.55s)

                                                
                                    
x
+
TestErrorSpam/status (0.84s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837507 --log_dir /tmp/nospam-837507 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837507 --log_dir /tmp/nospam-837507 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837507 --log_dir /tmp/nospam-837507 status
--- PASS: TestErrorSpam/status (0.84s)

                                                
                                    
x
+
TestErrorSpam/pause (1.44s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837507 --log_dir /tmp/nospam-837507 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837507 --log_dir /tmp/nospam-837507 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837507 --log_dir /tmp/nospam-837507 pause
--- PASS: TestErrorSpam/pause (1.44s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.74s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837507 --log_dir /tmp/nospam-837507 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837507 --log_dir /tmp/nospam-837507 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837507 --log_dir /tmp/nospam-837507 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

                                                
                                    
x
+
TestErrorSpam/stop (1.35s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837507 --log_dir /tmp/nospam-837507 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-837507 --log_dir /tmp/nospam-837507 stop: (1.176281948s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837507 --log_dir /tmp/nospam-837507 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837507 --log_dir /tmp/nospam-837507 stop
--- PASS: TestErrorSpam/stop (1.35s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20107-840384/.minikube/files/etc/test/nested/copy/847292/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (38.46s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-003749 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-003749 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (38.458688763s)
--- PASS: TestFunctional/serial/StartWithProxy (38.46s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (27.24s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1216 10:42:27.363670  847292 config.go:182] Loaded profile config "functional-003749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-003749 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-003749 --alsologtostderr -v=8: (27.241327333s)
functional_test.go:663: soft start took 27.242164596s for "functional-003749" cluster.
I1216 10:42:54.605422  847292 config.go:182] Loaded profile config "functional-003749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (27.24s)
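SoftStart re-runs `minikube start` against a profile that is already up and expects the existing cluster to be reused rather than recreated. A minimal sketch, assuming the functional-003749 profile from the StartWithProxy step above is still running:

# Second start against a running profile: minikube detects the existing
# cluster and soft-starts it instead of provisioning from scratch
out/minikube-linux-amd64 start -p functional-003749 --alsologtostderr -v=8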

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-003749 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.8s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.80s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (0.9s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-003749 /tmp/TestFunctionalserialCacheCmdcacheadd_local2310196176/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 cache add minikube-local-cache-test:functional-003749
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 cache delete minikube-local-cache-test:functional-003749
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-003749
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.90s)
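This subtest builds a throwaway local image and pushes it into minikube's image cache, then removes it again. The same cycle by hand, assuming a Dockerfile in the current directory (the test builds from a generated temp directory, and the tag here is illustrative):

docker build -t minikube-local-cache-test:demo .
out/minikube-linux-amd64 -p functional-003749 cache add minikube-local-cache-test:demo
out/minikube-linux-amd64 -p functional-003749 cache delete minikube-local-cache-test:demo
docker rmi minikube-local-cache-test:demo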

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003749 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (263.233108ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)
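The reload subtest deletes a cached image inside the node, confirms crictl no longer sees it (the expected exit status 1 above), then uses `cache reload` to push cached images back in. The same sequence by hand, using the commands from the log:

out/minikube-linux-amd64 -p functional-003749 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-003749 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image gone
out/minikube-linux-amd64 -p functional-003749 cache reload
out/minikube-linux-amd64 -p functional-003749 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again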

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 kubectl -- --context functional-003749 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-003749 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (42.13s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-003749 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-003749 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.125433528s)
functional_test.go:761: restart took 42.125544796s for "functional-003749" cluster.
I1216 10:43:42.837315  847292 config.go:182] Loaded profile config "functional-003749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (42.13s)
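ExtraConfig restarts the cluster with an apiserver flag injected via --extra-config (component.key=value form) and waits for all components to come back. A minimal sketch using the flag from the log; the kubectl check afterwards is an assumption about how one would confirm the control plane recovered, mirroring the ComponentHealth step that follows:

out/minikube-linux-amd64 start -p functional-003749 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
kubectl --context functional-003749 get po -l tier=control-plane -n kube-system   # all control-plane pods should be Running/Ready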

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-003749 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-003749 logs: (1.268456553s)
--- PASS: TestFunctional/serial/LogsCmd (1.27s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 logs --file /tmp/TestFunctionalserialLogsFileCmd1593907318/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-003749 logs --file /tmp/TestFunctionalserialLogsFileCmd1593907318/001/logs.txt: (1.266495083s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.27s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.74s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-003749 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-003749
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-003749: exit status 115 (313.625454ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31639 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-003749 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.74s)
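InvalidService applies a Service whose selector matches no running pod and expects `minikube service` to exit with SVC_UNREACHABLE (exit status 115 above) rather than hand back a dead URL. The same check by hand, using the repo's testdata manifest:

kubectl --context functional-003749 apply -f testdata/invalidsvc.yaml
out/minikube-linux-amd64 service invalid-svc -p functional-003749   # expected to fail: exit status 115, no running pod for service
kubectl --context functional-003749 delete -f testdata/invalidsvc.yaml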

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003749 config get cpus: exit status 14 (55.337345ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003749 config get cpus: exit status 14 (59.511062ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
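ConfigCmd exercises the set/get/unset cycle and relies on `config get` exiting with status 14 when the key is absent (the two non-zero exits above). A by-hand version of the same cycle:

out/minikube-linux-amd64 -p functional-003749 config get cpus     # exit 14: key not set
out/minikube-linux-amd64 -p functional-003749 config set cpus 2
out/minikube-linux-amd64 -p functional-003749 config get cpus     # prints 2
out/minikube-linux-amd64 -p functional-003749 config unset cpus   # back to unset; get now exits 14 again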

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-003749 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-003749 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 890169: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.29s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-003749 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-003749 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (166.748291ms)

                                                
                                                
-- stdout --
	* [functional-003749] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-840384/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-840384/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 10:44:13.459395  889672 out.go:345] Setting OutFile to fd 1 ...
	I1216 10:44:13.459657  889672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:44:13.459669  889672 out.go:358] Setting ErrFile to fd 2...
	I1216 10:44:13.459676  889672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:44:13.459889  889672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-840384/.minikube/bin
	I1216 10:44:13.460430  889672 out.go:352] Setting JSON to false
	I1216 10:44:13.461395  889672 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12400,"bootTime":1734333453,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 10:44:13.461460  889672 start.go:139] virtualization: kvm guest
	I1216 10:44:13.463701  889672 out.go:177] * [functional-003749] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 10:44:13.465299  889672 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 10:44:13.465354  889672 notify.go:220] Checking for updates...
	I1216 10:44:13.467556  889672 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 10:44:13.468660  889672 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-840384/kubeconfig
	I1216 10:44:13.469723  889672 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-840384/.minikube
	I1216 10:44:13.470867  889672 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 10:44:13.471961  889672 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 10:44:13.474021  889672 config.go:182] Loaded profile config "functional-003749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:44:13.474607  889672 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 10:44:13.502616  889672 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 10:44:13.502713  889672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 10:44:13.560954  889672 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-16 10:44:13.54856891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 10:44:13.561063  889672 docker.go:318] overlay module found
	I1216 10:44:13.563514  889672 out.go:177] * Using the docker driver based on existing profile
	I1216 10:44:13.567243  889672 start.go:297] selected driver: docker
	I1216 10:44:13.567260  889672 start.go:901] validating driver "docker" against &{Name:functional-003749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-003749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 10:44:13.567357  889672 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 10:44:13.569148  889672 out.go:201] 
	W1216 10:44:13.570141  889672 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 10:44:13.571127  889672 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-003749 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.37s)
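DryRun checks that `minikube start` validates resources before doing any real work: requesting 250MB is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23 above), while a plain dry run against the existing profile succeeds. Reproducing the failing case; the trailing `echo $?` is just one way to surface the exit code:

out/minikube-linux-amd64 start -p functional-003749 --dry-run --memory 250MB --driver=docker --container-runtime=crio
echo $?   # 23: requested memory is below the 1800MB usable minimum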

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-003749 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-003749 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (153.68818ms)

                                                
                                                
-- stdout --
	* [functional-003749] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-840384/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-840384/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 10:44:00.933077  886455 out.go:345] Setting OutFile to fd 1 ...
	I1216 10:44:00.933306  886455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:44:00.933314  886455 out.go:358] Setting ErrFile to fd 2...
	I1216 10:44:00.933318  886455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:44:00.933603  886455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-840384/.minikube/bin
	I1216 10:44:00.934096  886455 out.go:352] Setting JSON to false
	I1216 10:44:00.935122  886455 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":12388,"bootTime":1734333453,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 10:44:00.935191  886455 start.go:139] virtualization: kvm guest
	I1216 10:44:00.936942  886455 out.go:177] * [functional-003749] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1216 10:44:00.938012  886455 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 10:44:00.938070  886455 notify.go:220] Checking for updates...
	I1216 10:44:00.940106  886455 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 10:44:00.941202  886455 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-840384/kubeconfig
	I1216 10:44:00.942283  886455 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-840384/.minikube
	I1216 10:44:00.943283  886455 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 10:44:00.944387  886455 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 10:44:00.945925  886455 config.go:182] Loaded profile config "functional-003749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:44:00.946692  886455 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 10:44:00.971637  886455 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 10:44:00.971724  886455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 10:44:01.025018  886455 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:57 SystemTime:2024-12-16 10:44:01.016039223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 10:44:01.025131  886455 docker.go:318] overlay module found
	I1216 10:44:01.027101  886455 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1216 10:44:01.028200  886455 start.go:297] selected driver: docker
	I1216 10:44:01.028216  886455 start.go:901] validating driver "docker" against &{Name:functional-003749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-003749 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 10:44:01.028342  886455 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 10:44:01.030483  886455 out.go:201] 
	W1216 10:44:01.031489  886455 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 10:44:01.032554  886455 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.94s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-003749 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-003749 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4b846" [6a635432-ed09-430f-a0ae-74ad460d25aa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4b846" [6a635432-ed09-430f-a0ae-74ad460d25aa] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003473201s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30840
functional_test.go:1675: http://192.168.49.2:30840: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-4b846

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30840
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.60s)
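ServiceCmdConnect wires an echoserver Deployment to a NodePort Service and asks minikube for the reachable URL (http://192.168.49.2:30840 above), then fetches it. The same flow by hand; the final curl is an assumption about how one would poke the endpoint manually, since the test fetches it programmatically:

kubectl --context functional-003749 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-003749 expose deployment hello-node-connect --type=NodePort --port=8080
out/minikube-linux-amd64 -p functional-003749 service hello-node-connect --url   # prints the NodePort URL
curl -s "$(out/minikube-linux-amd64 -p functional-003749 service hello-node-connect --url)"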

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (30.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2c785f57-265c-43cc-a9e5-48e062507e68] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003395749s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-003749 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-003749 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-003749 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-003749 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b203c228-ab2b-4a39-bde4-03c9708615a3] Pending
helpers_test.go:344: "sp-pod" [b203c228-ab2b-4a39-bde4-03c9708615a3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b203c228-ab2b-4a39-bde4-03c9708615a3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.047643848s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-003749 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-003749 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-003749 delete -f testdata/storage-provisioner/pod.yaml: (2.518570024s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-003749 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4f54a4d3-ff00-46b5-907f-6deb81de56b2] Pending
helpers_test.go:344: "sp-pod" [4f54a4d3-ff00-46b5-907f-6deb81de56b2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4f54a4d3-ff00-46b5-907f-6deb81de56b2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.026834056s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-003749 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.50s)
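The PVC test verifies that data written through the claim survives pod deletion: write a file from the first pod, delete the pod, start a second pod against the same claim, and list the mount. A condensed sketch using the repo's testdata manifests, with the commands taken from the log:

kubectl --context functional-003749 apply -f testdata/storage-provisioner/pvc.yaml
kubectl --context functional-003749 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-003749 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-003749 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-003749 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-003749 exec sp-pod -- ls /tmp/mount   # foo should still be present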

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh -n functional-003749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 cp functional-003749:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3657162352/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh -n functional-003749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh -n functional-003749 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.16s)
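
A small sketch of the copy round-trip exercised above: push a local file into the node, pull it back out, and compare contents. The profile name and in-node path are taken from the log; the /tmp destination and the byte comparison at the end are added here for illustration only.

// cp_roundtrip.go: host -> node -> host copy, then compare the bytes.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-amd64"
	// Host -> node.
	if err := exec.Command(mk, "-p", "functional-003749", "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").Run(); err != nil {
		fmt.Println("cp into node failed:", err)
		return
	}
	// Node -> host.
	if err := exec.Command(mk, "-p", "functional-003749", "cp",
		"functional-003749:/home/docker/cp-test.txt", "/tmp/cp-test-roundtrip.txt").Run(); err != nil {
		fmt.Println("cp out of node failed:", err)
		return
	}
	orig, _ := os.ReadFile("testdata/cp-test.txt")
	back, _ := os.ReadFile("/tmp/cp-test-roundtrip.txt")
	if bytes.Equal(orig, back) {
		fmt.Println("round-trip contents match")
	} else {
		fmt.Println("round-trip contents differ")
	}
}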

                                                
                                    
x
+
TestFunctional/parallel/MySQL (19.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-003749 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-qcj6v" [dc36d45c-32a1-4127-9ec8-a12c710ae170] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-qcj6v" [dc36d45c-32a1-4127-9ec8-a12c710ae170] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.003576809s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-003749 exec mysql-6cdb49bbb-qcj6v -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-003749 exec mysql-6cdb49bbb-qcj6v -- mysql -ppassword -e "show databases;": exit status 1 (161.041143ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 10:44:18.397858  847292 retry.go:31] will retry after 1.241448033s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-003749 exec mysql-6cdb49bbb-qcj6v -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-003749 exec mysql-6cdb49bbb-qcj6v -- mysql -ppassword -e "show databases;": exit status 1 (167.992797ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 10:44:19.808359  847292 retry.go:31] will retry after 1.622816668s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-003749 exec mysql-6cdb49bbb-qcj6v -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.50s)
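
The two non-zero exits above (ERROR 1045, then ERROR 2002) are the expected transient failures while mysqld finishes initialising; the test simply retries. A sketch of that retry loop, using the pod name and command from the log (the attempt count and fixed two-second delay here are guesses, not the test's actual backoff):

// mysql_retry.go: retry "show databases;" until the MySQL pod is ready.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-003749", "exec", "mysql-6cdb49bbb-qcj6v", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		// Access-denied and socket errors during startup are expected; wait and retry.
		fmt.Printf("attempt %d failed (%v), retrying\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("mysql never became ready")
}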

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/847292/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "sudo cat /etc/test/nested/copy/847292/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/847292.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "sudo cat /etc/ssl/certs/847292.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/847292.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "sudo cat /usr/share/ca-certificates/847292.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/8472922.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "sudo cat /etc/ssl/certs/8472922.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/8472922.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "sudo cat /usr/share/ca-certificates/8472922.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.85s)
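
A sketch that re-checks the same synced certificate paths the test reads above, via minikube ssh. The 847292/8472922 file names and the hash-named symlinks are copied from this run's log; "test -s" (file exists and is non-empty) is used here instead of the test's "cat" plus content comparison:

// certsync_check.go: confirm each synced cert path exists inside the node.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/847292.pem",
		"/usr/share/ca-certificates/847292.pem",
		"/etc/ssl/certs/51391683.0",
		"/etc/ssl/certs/8472922.pem",
		"/usr/share/ca-certificates/8472922.pem",
		"/etc/ssl/certs/3ec20f2e.0",
	}
	for _, p := range paths {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-003749",
			"ssh", "sudo test -s "+p).Run()
		if err != nil {
			fmt.Println("missing or empty:", p)
			continue
		}
		fmt.Println("present:", p)
	}
}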

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-003749 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003749 ssh "sudo systemctl is-active docker": exit status 1 (280.555628ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003749 ssh "sudo systemctl is-active containerd": exit status 1 (282.595855ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
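
The non-zero exits above are the expected outcome: with cri-o as the active runtime, "systemctl is-active docker" and "systemctl is-active containerd" print "inactive" and exit non-zero, so the exit status alone is not a failure signal. A sketch of that interpretation, reusing the exact commands from the log:

// runtime_check.go: verify docker and containerd report "inactive" on a crio node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, svc := range []string{"docker", "containerd"} {
		// Output() captures only the remote stdout; minikube's own error text goes to stderr.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-003749",
			"ssh", "sudo systemctl is-active "+svc).Output()
		state := strings.TrimSpace(string(out))
		if err != nil && state == "inactive" {
			fmt.Printf("%s: inactive, as expected with crio as the active runtime\n", svc)
			continue
		}
		fmt.Printf("%s: unexpected result state=%q err=%v\n", svc, state, err)
	}
}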

                                                
                                    
x
+
TestFunctional/parallel/License (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-003749 /tmp/TestFunctionalparallelMountCmdany-port3997477231/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1734345829621367503" to /tmp/TestFunctionalparallelMountCmdany-port3997477231/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1734345829621367503" to /tmp/TestFunctionalparallelMountCmdany-port3997477231/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1734345829621367503" to /tmp/TestFunctionalparallelMountCmdany-port3997477231/001/test-1734345829621367503
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003749 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (300.772285ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 10:43:49.922468  847292 retry.go:31] will retry after 742.474117ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 10:43 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 10:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 10:43 test-1734345829621367503
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh cat /mount-9p/test-1734345829621367503
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-003749 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4304a9f6-63d1-4aa5-9fc7-c2584ca926d9] Pending
helpers_test.go:344: "busybox-mount" [4304a9f6-63d1-4aa5-9fc7-c2584ca926d9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4304a9f6-63d1-4aa5-9fc7-c2584ca926d9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4304a9f6-63d1-4aa5-9fc7-c2584ca926d9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00402156s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-003749 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-003749 /tmp/TestFunctionalparallelMountCmdany-port3997477231/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.58s)
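
A sketch of the verification step above. It assumes a "minikube mount <hostdir>:/mount-9p" process is already running for the functional-003749 profile (the test starts it as a daemon) and polls until findmnt reports the 9p mount, mirroring the single retry visible in the log; the attempt count and delay are illustrative:

// mount_check.go: poll until /mount-9p shows up as a 9p mount inside the node.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-003749",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		// grep exits non-zero until the mount appears; wait briefly and retry.
		time.Sleep(time.Second)
	}
	fmt.Println("/mount-9p never showed up as a 9p mount")
}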

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-003749 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-003749 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-8r8nq" [1116f870-ee73-4d72-849a-9eb3303c371f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-8r8nq" [1116f870-ee73-4d72-849a-9eb3303c371f] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004267871s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-003749 /tmp/TestFunctionalparallelMountCmdspecific-port754989820/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003749 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (280.902049ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 10:43:58.484163  847292 retry.go:31] will retry after 586.873293ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-003749 /tmp/TestFunctionalparallelMountCmdspecific-port754989820/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003749 ssh "sudo umount -f /mount-9p": exit status 1 (297.884713ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-003749 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-003749 /tmp/TestFunctionalparallelMountCmdspecific-port754989820/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 service list -o json
functional_test.go:1494: Took "331.960167ms" to run "out/minikube-linux-amd64 -p functional-003749 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30305
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "397.46582ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "82.651611ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30305
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
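
A sketch that resolves the hello-node NodePort URL the same way the test does (http://192.168.49.2:30305 in this run) and then issues a plain HTTP GET against it; the GET is an added illustration, not something the recorded test performs, and it assumes "service --url" prints a single URL line:

// service_url.go: resolve the NodePort URL and probe it once.
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-003749",
		"service", "hello-node", "--url").Output()
	if err != nil {
		fmt.Println("could not resolve service URL:", err)
		return
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.49.2:30305
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("GET", url, "->", resp.Status)
}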

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-003749 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3802851356/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-003749 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3802851356/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-003749 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3802851356/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003749 ssh "findmnt -T" /mount1: exit status 1 (368.777091ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 10:44:00.578784  847292 retry.go:31] will retry after 602.713459ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-003749 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-003749 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3802851356/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-003749 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3802851356/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-003749 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3802851356/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "348.260183ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "60.143008ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-003749 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-003749
localhost/kicbase/echo-server:functional-003749
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-003749 image ls --format short --alsologtostderr:
I1216 10:44:20.865659  890963 out.go:345] Setting OutFile to fd 1 ...
I1216 10:44:20.865778  890963 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:44:20.865787  890963 out.go:358] Setting ErrFile to fd 2...
I1216 10:44:20.865791  890963 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:44:20.866005  890963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-840384/.minikube/bin
I1216 10:44:20.866653  890963 config.go:182] Loaded profile config "functional-003749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:44:20.866794  890963 config.go:182] Loaded profile config "functional-003749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:44:20.867222  890963 cli_runner.go:164] Run: docker container inspect functional-003749 --format={{.State.Status}}
I1216 10:44:20.885356  890963 ssh_runner.go:195] Run: systemctl --version
I1216 10:44:20.885427  890963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-003749
I1216 10:44:20.901293  890963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/functional-003749/id_rsa Username:docker}
I1216 10:44:20.988034  890963 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
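
The stderr above shows what "image ls" does under the hood on a crio cluster: run "sudo crictl images --output json" inside the node over SSH. A sketch doing the same directly and printing repo tags; the JSON field names ("images", "repoTags") follow crictl's JSON output and may need adjusting for other crictl versions:

// image_list.go: list image tags via crictl inside the node.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-003749",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		fmt.Println("crictl listing failed:", err)
		return
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("unexpected crictl output:", err)
		return
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}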

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-003749 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-003749  | 8859e92febcf8 | 3.33kB |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | alpine             | 91ca84b4f5779 | 54MB   |
| docker.io/library/nginx                 | latest             | 66f8bdd3810c9 | 196MB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/kicbase/echo-server           | functional-003749  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-003749 image ls --format table --alsologtostderr:
I1216 10:44:21.456806  891247 out.go:345] Setting OutFile to fd 1 ...
I1216 10:44:21.456933  891247 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:44:21.456944  891247 out.go:358] Setting ErrFile to fd 2...
I1216 10:44:21.456948  891247 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:44:21.457148  891247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-840384/.minikube/bin
I1216 10:44:21.457764  891247 config.go:182] Loaded profile config "functional-003749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:44:21.457882  891247 config.go:182] Loaded profile config "functional-003749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:44:21.458349  891247 cli_runner.go:164] Run: docker container inspect functional-003749 --format={{.State.Status}}
I1216 10:44:21.480407  891247 ssh_runner.go:195] Run: systemctl --version
I1216 10:44:21.480465  891247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-003749
I1216 10:44:21.500959  891247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/functional-003749/id_rsa Username:docker}
I1216 10:44:21.591686  891247 ssh_runner.go:195] Run: sudo crictl images --output json
2024/12/16 10:44:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-003749 image ls --format json --alsologtostderr:
[{"id":"66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":["docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"195919252"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-003749"],"size":"4943877"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"873ed751027
91e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-3
6f62932"],"size":"94965812"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982
b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/et
cd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66","repoDigests":["docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4","docker.io/library/nginx@sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371"],"repoTags":["docker.io/library/nginx:alpine"],"size":"53958631"},{"id":"8859e92febcf85e2c72053c50b35054d3a22
29cb2aed494205d29996aa7693f0","repoDigests":["localhost/minikube-local-cache-test@sha256:9d92270bd2d3864306214aa867edcf82c48722edc8ac4fd487c1fbee3fc56cd3"],"repoTags":["localhost/minikube-local-cache-test:functional-003749"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"847
c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-003749 image ls --format json --alsologtostderr:
I1216 10:44:21.233004  891159 out.go:345] Setting OutFile to fd 1 ...
I1216 10:44:21.233274  891159 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:44:21.233285  891159 out.go:358] Setting ErrFile to fd 2...
I1216 10:44:21.233290  891159 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:44:21.233502  891159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-840384/.minikube/bin
I1216 10:44:21.234168  891159 config.go:182] Loaded profile config "functional-003749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:44:21.234300  891159 config.go:182] Loaded profile config "functional-003749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:44:21.234726  891159 cli_runner.go:164] Run: docker container inspect functional-003749 --format={{.State.Status}}
I1216 10:44:21.251572  891159 ssh_runner.go:195] Run: systemctl --version
I1216 10:44:21.251625  891159 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-003749
I1216 10:44:21.267726  891159 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/functional-003749/id_rsa Username:docker}
I1216 10:44:21.359915  891159 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-003749 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66
repoDigests:
- docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4
- docker.io/library/nginx@sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371
repoTags:
- docker.io/library/nginx:alpine
size: "53958631"
- id: 66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests:
- docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "195919252"
- id: 8859e92febcf85e2c72053c50b35054d3a2229cb2aed494205d29996aa7693f0
repoDigests:
- localhost/minikube-local-cache-test@sha256:9d92270bd2d3864306214aa867edcf82c48722edc8ac4fd487c1fbee3fc56cd3
repoTags:
- localhost/minikube-local-cache-test:functional-003749
size: "3330"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-003749
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-003749 image ls --format yaml --alsologtostderr:
I1216 10:44:21.012289  891003 out.go:345] Setting OutFile to fd 1 ...
I1216 10:44:21.012544  891003 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:44:21.012553  891003 out.go:358] Setting ErrFile to fd 2...
I1216 10:44:21.012558  891003 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:44:21.012729  891003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-840384/.minikube/bin
I1216 10:44:21.013281  891003 config.go:182] Loaded profile config "functional-003749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:44:21.013382  891003 config.go:182] Loaded profile config "functional-003749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:44:21.013758  891003 cli_runner.go:164] Run: docker container inspect functional-003749 --format={{.State.Status}}
I1216 10:44:21.032163  891003 ssh_runner.go:195] Run: systemctl --version
I1216 10:44:21.032225  891003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-003749
I1216 10:44:21.050205  891003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/functional-003749/id_rsa Username:docker}
I1216 10:44:21.140304  891003 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
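The YAML block above is the image inventory printed by the minikube image subcommand in YAML mode. A minimal way to reproduce the same listing by hand, assuming the functional-003749 profile is still running, is:

out/minikube-linux-amd64 -p functional-003749 image ls --format yaml

On the crio runtime this is backed by "sudo crictl images --output json" on the node, as the --alsologtostderr trace above shows.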

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-003749 ssh pgrep buildkitd: exit status 1 (242.319673ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image build -t localhost/my-image:functional-003749 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-003749 image build -t localhost/my-image:functional-003749 testdata/build --alsologtostderr: (2.10077011s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-003749 image build -t localhost/my-image:functional-003749 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3eaba768e3b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-003749
--> 0c66f72f5b0
Successfully tagged localhost/my-image:functional-003749
0c66f72f5b0f622f84268b9fc0487124b37295f8902360315b7a7125536523c8
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-003749 image build -t localhost/my-image:functional-003749 testdata/build --alsologtostderr:
I1216 10:44:21.320788  891195 out.go:345] Setting OutFile to fd 1 ...
I1216 10:44:21.321659  891195 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:44:21.321673  891195 out.go:358] Setting ErrFile to fd 2...
I1216 10:44:21.321678  891195 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:44:21.321845  891195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-840384/.minikube/bin
I1216 10:44:21.322502  891195 config.go:182] Loaded profile config "functional-003749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:44:21.323050  891195 config.go:182] Loaded profile config "functional-003749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:44:21.323453  891195 cli_runner.go:164] Run: docker container inspect functional-003749 --format={{.State.Status}}
I1216 10:44:21.341004  891195 ssh_runner.go:195] Run: systemctl --version
I1216 10:44:21.341043  891195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-003749
I1216 10:44:21.357377  891195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33149 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/functional-003749/id_rsa Username:docker}
I1216 10:44:21.449257  891195 build_images.go:161] Building image from path: /tmp/build.798115404.tar
I1216 10:44:21.449317  891195 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 10:44:21.458382  891195 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.798115404.tar
I1216 10:44:21.461746  891195 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.798115404.tar: stat -c "%s %y" /var/lib/minikube/build/build.798115404.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.798115404.tar': No such file or directory
I1216 10:44:21.461786  891195 ssh_runner.go:362] scp /tmp/build.798115404.tar --> /var/lib/minikube/build/build.798115404.tar (3072 bytes)
I1216 10:44:21.484422  891195 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.798115404
I1216 10:44:21.493626  891195 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.798115404 -xf /var/lib/minikube/build/build.798115404.tar
I1216 10:44:21.502897  891195 crio.go:315] Building image: /var/lib/minikube/build/build.798115404
I1216 10:44:21.502960  891195 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-003749 /var/lib/minikube/build/build.798115404 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1216 10:44:23.351026  891195 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-003749 /var/lib/minikube/build/build.798115404 --cgroup-manager=cgroupfs: (1.84803461s)
I1216 10:44:23.351084  891195 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.798115404
I1216 10:44:23.359344  891195 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.798115404.tar
I1216 10:44:23.366998  891195 build_images.go:217] Built localhost/my-image:functional-003749 from /tmp/build.798115404.tar
I1216 10:44:23.367046  891195 build_images.go:133] succeeded building to: functional-003749
I1216 10:44:23.367053  891195 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.55s)
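A hand-run sketch of the same build flow, assuming the functional-003749 profile is up and testdata/build contains the three-step build context shown in the stdout above:

# the test first probes for buildkitd on the node (absent here, hence the exit status 1)
out/minikube-linux-amd64 -p functional-003749 ssh pgrep buildkitd
# build the image inside the node; with crio, minikube delegates to podman build on the node
out/minikube-linux-amd64 -p functional-003749 image build -t localhost/my-image:functional-003749 testdata/build
# confirm the new tag is visible in the node's image store
out/minikube-linux-amd64 -p functional-003749 image ls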

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-003749
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image load --daemon kicbase/echo-server:functional-003749 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-003749 image load --daemon kicbase/echo-server:functional-003749 --alsologtostderr: (1.059257442s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.31s)
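The load path exercised here moves an image from the host Docker daemon into the node's image store. A minimal reproduction, assuming kicbase/echo-server:functional-003749 was already tagged in the host daemon by the Setup test above:

# push the host daemon's copy of the tag into the minikube node, then verify
out/minikube-linux-amd64 -p functional-003749 image load --daemon kicbase/echo-server:functional-003749
out/minikube-linux-amd64 -p functional-003749 image ls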

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image load --daemon kicbase/echo-server:functional-003749 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-003749
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image load --daemon kicbase/echo-server:functional-003749 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-003749 image load --daemon kicbase/echo-server:functional-003749 --alsologtostderr: (1.15265037s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.70s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-003749 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-003749 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-003749 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-003749 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 888338: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)
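This test starts a second tunnel while the first is still running and then tears both down. Only as an illustration (the test harness manages the two daemon processes itself rather than backgrounding them in a shell):

out/minikube-linux-amd64 -p functional-003749 tunnel --alsologtostderr &
out/minikube-linux-amd64 -p functional-003749 tunnel --alsologtostderr &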

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image save kicbase/echo-server:functional-003749 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-003749 image save kicbase/echo-server:functional-003749 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.021548388s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.02s)
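The save path writes the tag out of the node to a tarball on the host; the ImageLoadFromFile test below feeds the same tarball back in. A minimal sketch, assuming the profile is running and the workspace path is writable:

out/minikube-linux-amd64 -p functional-003749 image save kicbase/echo-server:functional-003749 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
out/minikube-linux-amd64 -p functional-003749 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar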

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-003749 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-003749 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e60f9ca7-23ca-4f4b-b09c-b5b0738eba73] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e60f9ca7-23ca-4f4b-b09c-b5b0738eba73] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.004104068s
I1216 10:44:20.790036  847292 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.30s)
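The setup step deploys the nginx test service and waits for its pod. By hand, against the same context, the equivalent is roughly the following (kubectl wait stands in for the Go helper's polling loop):

kubectl --context functional-003749 apply -f testdata/testsvc.yaml
kubectl --context functional-003749 wait --for=condition=ready pod -l run=nginx-svc --timeout=4m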

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (2.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image rm kicbase/echo-server:functional-003749 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-003749 image rm kicbase/echo-server:functional-003749 --alsologtostderr: (2.127729856s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.39s)
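Removal works directly against the node's image store. A minimal reproduction, assuming the tag is still present from the earlier load tests:

out/minikube-linux-amd64 -p functional-003749 image rm kicbase/echo-server:functional-003749
out/minikube-linux-amd64 -p functional-003749 image ls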

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-003749 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.012027676s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-003749
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-003749 image save --daemon kicbase/echo-server:functional-003749 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-003749 image save --daemon kicbase/echo-server:functional-003749 --alsologtostderr: (1.536002735s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-003749
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.58s)
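This is the reverse of ImageLoadDaemon: the host copy is removed, then pulled back from the node into the local Docker daemon. A minimal sketch of the same sequence (note the restored image carries the localhost/ prefix, as the final inspect shows):

docker rmi kicbase/echo-server:functional-003749
out/minikube-linux-amd64 -p functional-003749 image save --daemon kicbase/echo-server:functional-003749
docker image inspect localhost/kicbase/echo-server:functional-003749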

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-003749 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.194.137 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
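With the tunnel from StartTunnel still running, the LoadBalancer IP assigned to nginx-svc is reachable straight from the host. curl is not part of the test itself, but an illustrative manual check would be:

curl -s http://10.103.194.137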

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-003749 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-003749
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-003749
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-003749
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (106.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-489343 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1216 10:44:43.798153  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:44:43.804496  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:44:43.815926  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:44:43.837264  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:44:43.878596  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:44:43.960620  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:44:44.122034  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:44:44.443746  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:44:45.086010  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:44:46.367701  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:44:48.930567  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:44:54.052356  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:45:04.293848  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:45:24.775919  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:46:05.737726  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-489343 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m46.058445362s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (106.71s)
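The HA start above brings up a cluster with multiple control-plane nodes in a single invocation (the status output in the later StopSecondaryNode test lists three control-plane members plus a worker). Stripped of the test harness, the commands are:

out/minikube-linux-amd64 start -p ha-489343 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=crio
out/minikube-linux-amd64 -p ha-489343 status -v=7 --alsologtostderr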

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-489343 -- rollout status deployment/busybox: (3.28648292s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- exec busybox-7dff88458-84h26 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- exec busybox-7dff88458-bflpb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- exec busybox-7dff88458-qxd42 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- exec busybox-7dff88458-84h26 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- exec busybox-7dff88458-bflpb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- exec busybox-7dff88458-qxd42 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- exec busybox-7dff88458-84h26 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- exec busybox-7dff88458-bflpb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- exec busybox-7dff88458-qxd42 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.12s)
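The deployment check runs entirely through the bundled kubectl. A condensed version of the sequence above (one pod shown; the test repeats the lookups for all three busybox replicas):

out/minikube-linux-amd64 kubectl -p ha-489343 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
out/minikube-linux-amd64 kubectl -p ha-489343 -- rollout status deployment/busybox
out/minikube-linux-amd64 kubectl -p ha-489343 -- exec busybox-7dff88458-84h26 -- nslookup kubernetes.default.svc.cluster.local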

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- exec busybox-7dff88458-84h26 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- exec busybox-7dff88458-84h26 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- exec busybox-7dff88458-bflpb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- exec busybox-7dff88458-bflpb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- exec busybox-7dff88458-qxd42 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-489343 -- exec busybox-7dff88458-qxd42 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (34.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-489343 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-489343 -v=7 --alsologtostderr: (33.711919115s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (34.51s)
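Adding the worker (ha-489343-m04 in this run) and re-checking cluster health comes down to:

out/minikube-linux-amd64 node add -p ha-489343 -v=7 --alsologtostderr
out/minikube-linux-amd64 -p ha-489343 status -v=7 --alsologtostderr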

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-489343 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (15.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp testdata/cp-test.txt ha-489343:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp ha-489343:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile349249490/001/cp-test_ha-489343.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp ha-489343:/home/docker/cp-test.txt ha-489343-m02:/home/docker/cp-test_ha-489343_ha-489343-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m02 "sudo cat /home/docker/cp-test_ha-489343_ha-489343-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp ha-489343:/home/docker/cp-test.txt ha-489343-m03:/home/docker/cp-test_ha-489343_ha-489343-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m03 "sudo cat /home/docker/cp-test_ha-489343_ha-489343-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp ha-489343:/home/docker/cp-test.txt ha-489343-m04:/home/docker/cp-test_ha-489343_ha-489343-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m04 "sudo cat /home/docker/cp-test_ha-489343_ha-489343-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp testdata/cp-test.txt ha-489343-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp ha-489343-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile349249490/001/cp-test_ha-489343-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp ha-489343-m02:/home/docker/cp-test.txt ha-489343:/home/docker/cp-test_ha-489343-m02_ha-489343.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343 "sudo cat /home/docker/cp-test_ha-489343-m02_ha-489343.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp ha-489343-m02:/home/docker/cp-test.txt ha-489343-m03:/home/docker/cp-test_ha-489343-m02_ha-489343-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m03 "sudo cat /home/docker/cp-test_ha-489343-m02_ha-489343-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp ha-489343-m02:/home/docker/cp-test.txt ha-489343-m04:/home/docker/cp-test_ha-489343-m02_ha-489343-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m04 "sudo cat /home/docker/cp-test_ha-489343-m02_ha-489343-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp testdata/cp-test.txt ha-489343-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp ha-489343-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile349249490/001/cp-test_ha-489343-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp ha-489343-m03:/home/docker/cp-test.txt ha-489343:/home/docker/cp-test_ha-489343-m03_ha-489343.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343 "sudo cat /home/docker/cp-test_ha-489343-m03_ha-489343.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp ha-489343-m03:/home/docker/cp-test.txt ha-489343-m02:/home/docker/cp-test_ha-489343-m03_ha-489343-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m02 "sudo cat /home/docker/cp-test_ha-489343-m03_ha-489343-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp ha-489343-m03:/home/docker/cp-test.txt ha-489343-m04:/home/docker/cp-test_ha-489343-m03_ha-489343-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m04 "sudo cat /home/docker/cp-test_ha-489343-m03_ha-489343-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp testdata/cp-test.txt ha-489343-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp ha-489343-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile349249490/001/cp-test_ha-489343-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp ha-489343-m04:/home/docker/cp-test.txt ha-489343:/home/docker/cp-test_ha-489343-m04_ha-489343.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343 "sudo cat /home/docker/cp-test_ha-489343-m04_ha-489343.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp ha-489343-m04:/home/docker/cp-test.txt ha-489343-m02:/home/docker/cp-test_ha-489343-m04_ha-489343-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m02 "sudo cat /home/docker/cp-test_ha-489343-m04_ha-489343-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 cp ha-489343-m04:/home/docker/cp-test.txt ha-489343-m03:/home/docker/cp-test_ha-489343-m04_ha-489343-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m03 "sudo cat /home/docker/cp-test_ha-489343-m04_ha-489343-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.26s)
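The copy matrix above covers host-to-node, node-to-host, and node-to-node transfers, each verified by catting the file over ssh. A trimmed sketch, with the temporary destination paths shortened here:

out/minikube-linux-amd64 -p ha-489343 cp testdata/cp-test.txt ha-489343:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-489343 cp ha-489343:/home/docker/cp-test.txt /tmp/cp-test_ha-489343.txt
out/minikube-linux-amd64 -p ha-489343 cp ha-489343:/home/docker/cp-test.txt ha-489343-m02:/home/docker/cp-test_ha-489343_ha-489343-m02.txt
out/minikube-linux-amd64 -p ha-489343 ssh -n ha-489343-m02 "sudo cat /home/docker/cp-test_ha-489343_ha-489343-m02.txt"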

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-489343 node stop m02 -v=7 --alsologtostderr: (11.825107504s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-489343 status -v=7 --alsologtostderr: exit status 7 (631.648259ms)

                                                
                                                
-- stdout --
	ha-489343
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-489343-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-489343-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-489343-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 10:47:24.682355  912352 out.go:345] Setting OutFile to fd 1 ...
	I1216 10:47:24.682498  912352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:47:24.682508  912352 out.go:358] Setting ErrFile to fd 2...
	I1216 10:47:24.682513  912352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:47:24.682667  912352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-840384/.minikube/bin
	I1216 10:47:24.682821  912352 out.go:352] Setting JSON to false
	I1216 10:47:24.682848  912352 mustload.go:65] Loading cluster: ha-489343
	I1216 10:47:24.682933  912352 notify.go:220] Checking for updates...
	I1216 10:47:24.683263  912352 config.go:182] Loaded profile config "ha-489343": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:47:24.683286  912352 status.go:174] checking status of ha-489343 ...
	I1216 10:47:24.683762  912352 cli_runner.go:164] Run: docker container inspect ha-489343 --format={{.State.Status}}
	I1216 10:47:24.701637  912352 status.go:371] ha-489343 host status = "Running" (err=<nil>)
	I1216 10:47:24.701662  912352 host.go:66] Checking if "ha-489343" exists ...
	I1216 10:47:24.701955  912352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-489343
	I1216 10:47:24.718966  912352 host.go:66] Checking if "ha-489343" exists ...
	I1216 10:47:24.719175  912352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 10:47:24.719208  912352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-489343
	I1216 10:47:24.736636  912352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/ha-489343/id_rsa Username:docker}
	I1216 10:47:24.828243  912352 ssh_runner.go:195] Run: systemctl --version
	I1216 10:47:24.832102  912352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 10:47:24.842069  912352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 10:47:24.890689  912352 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-12-16 10:47:24.881067017 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 10:47:24.891300  912352 kubeconfig.go:125] found "ha-489343" server: "https://192.168.49.254:8443"
	I1216 10:47:24.891341  912352 api_server.go:166] Checking apiserver status ...
	I1216 10:47:24.891396  912352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 10:47:24.902232  912352 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1531/cgroup
	I1216 10:47:24.911068  912352 api_server.go:182] apiserver freezer: "13:freezer:/docker/0b85bc846708b212e7d90c694eeb883cc39e815a611af756099fd650a66cd073/crio/crio-e6741e1e0a5f2e3131f1d9d2185453fe473cede066b6e4eabba9db18591c91ed"
	I1216 10:47:24.911130  912352 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0b85bc846708b212e7d90c694eeb883cc39e815a611af756099fd650a66cd073/crio/crio-e6741e1e0a5f2e3131f1d9d2185453fe473cede066b6e4eabba9db18591c91ed/freezer.state
	I1216 10:47:24.918454  912352 api_server.go:204] freezer state: "THAWED"
	I1216 10:47:24.918479  912352 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1216 10:47:24.922225  912352 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1216 10:47:24.922256  912352 status.go:463] ha-489343 apiserver status = Running (err=<nil>)
	I1216 10:47:24.922266  912352 status.go:176] ha-489343 status: &{Name:ha-489343 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 10:47:24.922281  912352 status.go:174] checking status of ha-489343-m02 ...
	I1216 10:47:24.922608  912352 cli_runner.go:164] Run: docker container inspect ha-489343-m02 --format={{.State.Status}}
	I1216 10:47:24.938918  912352 status.go:371] ha-489343-m02 host status = "Stopped" (err=<nil>)
	I1216 10:47:24.938940  912352 status.go:384] host is not running, skipping remaining checks
	I1216 10:47:24.938947  912352 status.go:176] ha-489343-m02 status: &{Name:ha-489343-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 10:47:24.938985  912352 status.go:174] checking status of ha-489343-m03 ...
	I1216 10:47:24.939293  912352 cli_runner.go:164] Run: docker container inspect ha-489343-m03 --format={{.State.Status}}
	I1216 10:47:24.954856  912352 status.go:371] ha-489343-m03 host status = "Running" (err=<nil>)
	I1216 10:47:24.954874  912352 host.go:66] Checking if "ha-489343-m03" exists ...
	I1216 10:47:24.955132  912352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-489343-m03
	I1216 10:47:24.971882  912352 host.go:66] Checking if "ha-489343-m03" exists ...
	I1216 10:47:24.972205  912352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 10:47:24.972241  912352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-489343-m03
	I1216 10:47:24.988559  912352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33164 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/ha-489343-m03/id_rsa Username:docker}
	I1216 10:47:25.076415  912352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 10:47:25.088063  912352 kubeconfig.go:125] found "ha-489343" server: "https://192.168.49.254:8443"
	I1216 10:47:25.088088  912352 api_server.go:166] Checking apiserver status ...
	I1216 10:47:25.088116  912352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 10:47:25.097250  912352 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1448/cgroup
	I1216 10:47:25.105449  912352 api_server.go:182] apiserver freezer: "13:freezer:/docker/85258cc32cba43369ad7d9bc16a21ef03df71f93d7132348d40bc7f3f02440be/crio/crio-70fda1665242cdf8f119f13399cb392593555f3bbacb18dfdd0b4175d390ac46"
	I1216 10:47:25.105528  912352 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/85258cc32cba43369ad7d9bc16a21ef03df71f93d7132348d40bc7f3f02440be/crio/crio-70fda1665242cdf8f119f13399cb392593555f3bbacb18dfdd0b4175d390ac46/freezer.state
	I1216 10:47:25.113111  912352 api_server.go:204] freezer state: "THAWED"
	I1216 10:47:25.113143  912352 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1216 10:47:25.118451  912352 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1216 10:47:25.118469  912352 status.go:463] ha-489343-m03 apiserver status = Running (err=<nil>)
	I1216 10:47:25.118478  912352 status.go:176] ha-489343-m03 status: &{Name:ha-489343-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 10:47:25.118494  912352 status.go:174] checking status of ha-489343-m04 ...
	I1216 10:47:25.118712  912352 cli_runner.go:164] Run: docker container inspect ha-489343-m04 --format={{.State.Status}}
	I1216 10:47:25.135309  912352 status.go:371] ha-489343-m04 host status = "Running" (err=<nil>)
	I1216 10:47:25.135330  912352 host.go:66] Checking if "ha-489343-m04" exists ...
	I1216 10:47:25.135643  912352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-489343-m04
	I1216 10:47:25.152549  912352 host.go:66] Checking if "ha-489343-m04" exists ...
	I1216 10:47:25.152763  912352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 10:47:25.152799  912352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-489343-m04
	I1216 10:47:25.168617  912352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33169 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/ha-489343-m04/id_rsa Username:docker}
	I1216 10:47:25.255986  912352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 10:47:25.265982  912352 status.go:176] ha-489343-m04 status: &{Name:ha-489343-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.46s)
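Stopping one control-plane node degrades the cluster but leaves it serving: status exits 7 and reports m02 as Stopped while the remaining members stay Running. The same steps by hand (the matching node start m02 is exercised in RestartSecondaryNode below):

out/minikube-linux-amd64 -p ha-489343 node stop m02 -v=7 --alsologtostderr
out/minikube-linux-amd64 -p ha-489343 status -v=7 --alsologtostderr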

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (21.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 node start m02 -v=7 --alsologtostderr
E1216 10:47:27.659643  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-489343 node start m02 -v=7 --alsologtostderr: (20.582157036s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-489343 status -v=7 --alsologtostderr: (1.157635362s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (168.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-489343 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-489343 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-489343 -v=7 --alsologtostderr: (36.629022558s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-489343 --wait=true -v=7 --alsologtostderr
E1216 10:48:49.179264  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:48:49.185654  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:48:49.196969  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:48:49.218496  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:48:49.259905  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:48:49.341269  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:48:49.502696  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:48:49.824364  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:48:50.466478  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:48:51.748112  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:48:54.310120  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:48:59.432130  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:49:09.674399  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:49:30.156260  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:49:43.797630  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:50:11.118151  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:50:11.501883  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-489343 --wait=true -v=7 --alsologtostderr: (2m11.269063481s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-489343
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (168.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-489343 node delete m03 -v=7 --alsologtostderr: (10.514109078s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.28s)
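The go-template passed to kubectl at ha_test.go:521 prints the Ready condition status of every node, one status per line, so the test can confirm the remaining nodes report True after the secondary node is deleted. Below is a minimal sketch of reusing that template outside the harness; the file name is hypothetical and it assumes kubectl is on PATH with the current context pointing at the cluster.

	// ready_check.go - hypothetical sketch, not part of the test suite.
	// Counts nodes whose Ready condition is True using the same go-template
	// the test runs above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		ready := 0
		for _, line := range strings.Split(string(out), "\n") {
			if strings.TrimSpace(line) == "True" {
				ready++
			}
		}
		fmt.Printf("%d node(s) report Ready=True\n", ready)
	}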

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-489343 stop -v=7 --alsologtostderr: (35.41574779s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-489343 status -v=7 --alsologtostderr: exit status 7 (101.646288ms)

                                                
                                                
-- stdout --
	ha-489343
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-489343-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-489343-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 10:51:24.008396  929604 out.go:345] Setting OutFile to fd 1 ...
	I1216 10:51:24.008494  929604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:51:24.008502  929604 out.go:358] Setting ErrFile to fd 2...
	I1216 10:51:24.008506  929604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:51:24.008681  929604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-840384/.minikube/bin
	I1216 10:51:24.008853  929604 out.go:352] Setting JSON to false
	I1216 10:51:24.008880  929604 mustload.go:65] Loading cluster: ha-489343
	I1216 10:51:24.008918  929604 notify.go:220] Checking for updates...
	I1216 10:51:24.009279  929604 config.go:182] Loaded profile config "ha-489343": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:51:24.009305  929604 status.go:174] checking status of ha-489343 ...
	I1216 10:51:24.009799  929604 cli_runner.go:164] Run: docker container inspect ha-489343 --format={{.State.Status}}
	I1216 10:51:24.031175  929604 status.go:371] ha-489343 host status = "Stopped" (err=<nil>)
	I1216 10:51:24.031206  929604 status.go:384] host is not running, skipping remaining checks
	I1216 10:51:24.031212  929604 status.go:176] ha-489343 status: &{Name:ha-489343 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 10:51:24.031235  929604 status.go:174] checking status of ha-489343-m02 ...
	I1216 10:51:24.031495  929604 cli_runner.go:164] Run: docker container inspect ha-489343-m02 --format={{.State.Status}}
	I1216 10:51:24.046725  929604 status.go:371] ha-489343-m02 host status = "Stopped" (err=<nil>)
	I1216 10:51:24.046742  929604 status.go:384] host is not running, skipping remaining checks
	I1216 10:51:24.046748  929604 status.go:176] ha-489343-m02 status: &{Name:ha-489343-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 10:51:24.046767  929604 status.go:174] checking status of ha-489343-m04 ...
	I1216 10:51:24.046975  929604 cli_runner.go:164] Run: docker container inspect ha-489343-m04 --format={{.State.Status}}
	I1216 10:51:24.062639  929604 status.go:371] ha-489343-m04 host status = "Stopped" (err=<nil>)
	I1216 10:51:24.062659  929604 status.go:384] host is not running, skipping remaining checks
	I1216 10:51:24.062665  929604 status.go:176] ha-489343-m04 status: &{Name:ha-489343-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.52s)
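As the non-zero exit above shows, minikube status returns a failing exit code (7 in this run) while all hosts are stopped, which is what ha_test.go:539 relies on. A small sketch that surfaces the exit code in Go, assuming the binary path and profile name from this run:

	// status_probe.go - hypothetical sketch. Runs minikube status for the
	// ha-489343 profile and prints its exit code, which is non-zero while
	// the cluster is stopped (7 in the run above).
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-489343", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("status exit code:", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("could not run minikube:", err)
		}
	}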

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (100.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-489343 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1216 10:51:33.039640  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-489343 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m39.320379555s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (100.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (38.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-489343 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-489343 --control-plane -v=7 --alsologtostderr: (37.895098189s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-489343 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (38.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

                                                
                                    
x
+
TestJSONOutput/start/Command (42.37s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-738011 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1216 10:54:16.881578  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-738011 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (42.371312952s)
--- PASS: TestJSONOutput/start/Command (42.37s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-738011 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.56s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-738011 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.69s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-738011 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-738011 --output=json --user=testUser: (5.686069537s)
--- PASS: TestJSONOutput/stop/Command (5.69s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-391413 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-391413 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.646064ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"53672323-43cb-4d43-88e2-88ae3354f620","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-391413] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4aa52e7-e65b-4242-9877-7fc58bf34260","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20107"}}
	{"specversion":"1.0","id":"f3d42d27-32f5-4a21-a0c8-f0097b26a1e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7cfbe137-8204-4664-9d61-050eb9bf97c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20107-840384/kubeconfig"}}
	{"specversion":"1.0","id":"abfd5eb7-83b9-4c7f-9274-207aef1437a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-840384/.minikube"}}
	{"specversion":"1.0","id":"977216db-f93e-4c88-82b0-283bf9b4563f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9a729f7d-52d7-473c-b444-7347b3719c61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c47396de-e043-4128-b041-46e04efc403e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-391413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-391413
--- PASS: TestErrorJSONOutput (0.20s)
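Each line in the stdout block above is a CloudEvents-style JSON object with specversion, id, source, type, and a data payload of string fields; the final io.k8s.sigs.minikube.error event carries the DRV_UNSUPPORTED_OS message and exit code 56. A minimal decoding sketch (hypothetical file, reads event lines from stdin; field names mirror the output shown):

	// event_parse.go - hypothetical sketch for consuming --output=json events.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type minikubeEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` into this
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error event: %s (exit code %s)\n", ev.Data["message"], ev.Data["exitcode"])
			}
		}
	}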

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (30.43s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-553247 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-553247 --network=: (28.380042495s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-553247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-553247
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-553247: (2.033906892s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.43s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (22.94s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-136632 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-136632 --network=bridge: (21.013553796s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-136632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-136632
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-136632: (1.904869474s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.94s)

                                                
                                    
x
+
TestKicExistingNetwork (22.88s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1216 10:55:39.878299  847292 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1216 10:55:39.893339  847292 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1216 10:55:39.893411  847292 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1216 10:55:39.893429  847292 cli_runner.go:164] Run: docker network inspect existing-network
W1216 10:55:39.908432  847292 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1216 10:55:39.908459  847292 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1216 10:55:39.908472  847292 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1216 10:55:39.908615  847292 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1216 10:55:39.923946  847292 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fd227cccb014 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:31:3b:a4:54} reservation:<nil>}
I1216 10:55:39.924470  847292 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000126b70}
I1216 10:55:39.924508  847292 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1216 10:55:39.924552  847292 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1216 10:55:39.981960  847292 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-985653 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-985653 --network=existing-network: (20.874556069s)
helpers_test.go:175: Cleaning up "existing-network-985653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-985653
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-985653: (1.874108019s)
I1216 10:56:02.746757  847292 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.88s)
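The log above shows the full flow: the inspect of existing-network fails, minikube skips the taken 192.168.49.0/24 subnet, picks the free 192.168.58.0/24 range, and creates the bridge network with masquerading, inter-container connectivity, and an MTU of 1500. A sketch reproducing that docker network create call with the flags exactly as logged (hypothetical file name, assumes docker is available):

	// network_create.go - hypothetical sketch mirroring the command logged
	// by network_create.go:124 above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		args := []string{
			"network", "create",
			"--driver=bridge",
			"--subnet=192.168.58.0/24",
			"--gateway=192.168.58.1",
			"-o", "--ip-masq",
			"-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=existing-network",
			"existing-network",
		}
		out, err := exec.Command("docker", args...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("docker network create failed:", err)
		}
	}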

                                                
                                    
x
+
TestKicCustomSubnet (26.55s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-803883 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-803883 --subnet=192.168.60.0/24: (24.492335873s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-803883 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-803883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-803883
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-803883: (2.041182477s)
--- PASS: TestKicCustomSubnet (26.55s)

                                                
                                    
x
+
TestKicStaticIP (25.23s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-187587 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-187587 --static-ip=192.168.200.200: (23.141809828s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-187587 ip
helpers_test.go:175: Cleaning up "static-ip-187587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-187587
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-187587: (1.9575935s)
--- PASS: TestKicStaticIP (25.23s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (45.28s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-322124 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-322124 --driver=docker  --container-runtime=crio: (20.252173286s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-344944 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-344944 --driver=docker  --container-runtime=crio: (19.897575293s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-322124
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-344944
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-344944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-344944
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-344944: (1.81591203s)
helpers_test.go:175: Cleaning up "first-322124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-322124
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-322124: (2.204502569s)
--- PASS: TestMinikubeProfile (45.28s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (5.33s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-590431 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-590431 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.334359649s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.33s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-590431 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (5.46s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-605609 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-605609 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.462880168s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.46s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-605609 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.58s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-590431 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-590431 --alsologtostderr -v=5: (1.581970347s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-605609 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.17s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-605609
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-605609: (1.16835092s)
--- PASS: TestMountStart/serial/Stop (1.17s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.08s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-605609
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-605609: (6.079729501s)
--- PASS: TestMountStart/serial/RestartStopped (7.08s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-605609 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (70.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-736390 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1216 10:58:49.181883  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-736390 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m10.334089736s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (70.76s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736390 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736390 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-736390 -- rollout status deployment/busybox: (3.09727062s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736390 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736390 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736390 -- exec busybox-7dff88458-hx9bp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736390 -- exec busybox-7dff88458-vfrgg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736390 -- exec busybox-7dff88458-hx9bp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736390 -- exec busybox-7dff88458-vfrgg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736390 -- exec busybox-7dff88458-hx9bp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736390 -- exec busybox-7dff88458-vfrgg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.43s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736390 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736390 -- exec busybox-7dff88458-hx9bp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736390 -- exec busybox-7dff88458-hx9bp -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736390 -- exec busybox-7dff88458-vfrgg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-736390 -- exec busybox-7dff88458-vfrgg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)
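The check above resolves host.minikube.internal inside each busybox pod, extracts the address with awk/cut, and pings it once from the pod. A sketch of the same round trip for a single pod, using the pod name from this run; it assumes kubectl is on PATH and its context points at multinode-736390:

	// host_ping.go - hypothetical sketch of the per-pod host reachability check.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		pod := "busybox-7dff88458-hx9bp" // pod name from this run; substitute your own
		resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
		out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", resolve).Output()
		if err != nil {
			fmt.Println("resolve failed:", err)
			return
		}
		hostIP := strings.TrimSpace(string(out))
		ping := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP)
		if pingOut, err := ping.CombinedOutput(); err != nil {
			fmt.Printf("ping failed: %v\n%s", err, pingOut)
		} else {
			fmt.Printf("host %s reachable from %s\n", hostIP, pod)
		}
	}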

                                                
                                    
x
+
TestMultiNode/serial/AddNode (28.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-736390 -v 3 --alsologtostderr
E1216 10:59:43.798739  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-736390 -v 3 --alsologtostderr: (28.385709374s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.96s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-736390 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (8.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 cp testdata/cp-test.txt multinode-736390:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 cp multinode-736390:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2203820829/001/cp-test_multinode-736390.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 cp multinode-736390:/home/docker/cp-test.txt multinode-736390-m02:/home/docker/cp-test_multinode-736390_multinode-736390-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390-m02 "sudo cat /home/docker/cp-test_multinode-736390_multinode-736390-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 cp multinode-736390:/home/docker/cp-test.txt multinode-736390-m03:/home/docker/cp-test_multinode-736390_multinode-736390-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390-m03 "sudo cat /home/docker/cp-test_multinode-736390_multinode-736390-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 cp testdata/cp-test.txt multinode-736390-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 cp multinode-736390-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2203820829/001/cp-test_multinode-736390-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 cp multinode-736390-m02:/home/docker/cp-test.txt multinode-736390:/home/docker/cp-test_multinode-736390-m02_multinode-736390.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390 "sudo cat /home/docker/cp-test_multinode-736390-m02_multinode-736390.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 cp multinode-736390-m02:/home/docker/cp-test.txt multinode-736390-m03:/home/docker/cp-test_multinode-736390-m02_multinode-736390-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390-m03 "sudo cat /home/docker/cp-test_multinode-736390-m02_multinode-736390-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 cp testdata/cp-test.txt multinode-736390-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 cp multinode-736390-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2203820829/001/cp-test_multinode-736390-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 cp multinode-736390-m03:/home/docker/cp-test.txt multinode-736390:/home/docker/cp-test_multinode-736390-m03_multinode-736390.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390 "sudo cat /home/docker/cp-test_multinode-736390-m03_multinode-736390.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 cp multinode-736390-m03:/home/docker/cp-test.txt multinode-736390-m02:/home/docker/cp-test_multinode-736390-m03_multinode-736390-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 ssh -n multinode-736390-m02 "sudo cat /home/docker/cp-test_multinode-736390-m03_multinode-736390-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.70s)
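The CopyFile block exercises every copy direction: local file to node, node back to a temp dir, and node to node, each verified with sudo cat over ssh. A sketch of one local-to-node round trip using the binary path and profile from this run (hypothetical file name):

	// cp_roundtrip.go - hypothetical sketch of a single cp / ssh-cat verification
	// like the ones logged above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		mk := "out/minikube-linux-amd64"
		local := "testdata/cp-test.txt"
		remote := "multinode-736390:/home/docker/cp-test.txt"

		if out, err := exec.Command(mk, "-p", "multinode-736390", "cp", local, remote).CombinedOutput(); err != nil {
			fmt.Printf("cp failed: %v\n%s", err, out)
			return
		}
		got, err := exec.Command(mk, "-p", "multinode-736390", "ssh", "-n", "multinode-736390",
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			fmt.Println("ssh cat failed:", err)
			return
		}
		want, _ := os.ReadFile(local)
		fmt.Println("contents match:", strings.TrimSpace(string(got)) == strings.TrimSpace(string(want)))
	}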

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-736390 node stop m03: (1.172189805s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-736390 status: exit status 7 (442.730306ms)

                                                
                                                
-- stdout --
	multinode-736390
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-736390-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-736390-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-736390 status --alsologtostderr: exit status 7 (436.301197ms)

                                                
                                                
-- stdout --
	multinode-736390
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-736390-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-736390-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 10:59:59.054893  995431 out.go:345] Setting OutFile to fd 1 ...
	I1216 10:59:59.055006  995431 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:59:59.055015  995431 out.go:358] Setting ErrFile to fd 2...
	I1216 10:59:59.055019  995431 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:59:59.055187  995431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-840384/.minikube/bin
	I1216 10:59:59.055338  995431 out.go:352] Setting JSON to false
	I1216 10:59:59.055364  995431 mustload.go:65] Loading cluster: multinode-736390
	I1216 10:59:59.055403  995431 notify.go:220] Checking for updates...
	I1216 10:59:59.055784  995431 config.go:182] Loaded profile config "multinode-736390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:59:59.055806  995431 status.go:174] checking status of multinode-736390 ...
	I1216 10:59:59.056246  995431 cli_runner.go:164] Run: docker container inspect multinode-736390 --format={{.State.Status}}
	I1216 10:59:59.074465  995431 status.go:371] multinode-736390 host status = "Running" (err=<nil>)
	I1216 10:59:59.074488  995431 host.go:66] Checking if "multinode-736390" exists ...
	I1216 10:59:59.074738  995431 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-736390
	I1216 10:59:59.090558  995431 host.go:66] Checking if "multinode-736390" exists ...
	I1216 10:59:59.090788  995431 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 10:59:59.090831  995431 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-736390
	I1216 10:59:59.106461  995431 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33274 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/multinode-736390/id_rsa Username:docker}
	I1216 10:59:59.192123  995431 ssh_runner.go:195] Run: systemctl --version
	I1216 10:59:59.195847  995431 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 10:59:59.205656  995431 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 10:59:59.252593  995431 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-12-16 10:59:59.244186474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 10:59:59.253214  995431 kubeconfig.go:125] found "multinode-736390" server: "https://192.168.67.2:8443"
	I1216 10:59:59.253254  995431 api_server.go:166] Checking apiserver status ...
	I1216 10:59:59.253298  995431 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 10:59:59.263484  995431 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	I1216 10:59:59.271682  995431 api_server.go:182] apiserver freezer: "13:freezer:/docker/9ec219179e8c83b6f566a2a0c23ddf4bfa8cb7849bfb719f12f229ab751bc010/crio/crio-5377e55c46bc995dbd6ff0877d8912ababa14f53e176ca4a470c5cedac626254"
	I1216 10:59:59.271736  995431 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9ec219179e8c83b6f566a2a0c23ddf4bfa8cb7849bfb719f12f229ab751bc010/crio/crio-5377e55c46bc995dbd6ff0877d8912ababa14f53e176ca4a470c5cedac626254/freezer.state
	I1216 10:59:59.278792  995431 api_server.go:204] freezer state: "THAWED"
	I1216 10:59:59.278810  995431 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1216 10:59:59.282352  995431 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1216 10:59:59.282374  995431 status.go:463] multinode-736390 apiserver status = Running (err=<nil>)
	I1216 10:59:59.282389  995431 status.go:176] multinode-736390 status: &{Name:multinode-736390 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 10:59:59.282412  995431 status.go:174] checking status of multinode-736390-m02 ...
	I1216 10:59:59.282673  995431 cli_runner.go:164] Run: docker container inspect multinode-736390-m02 --format={{.State.Status}}
	I1216 10:59:59.298785  995431 status.go:371] multinode-736390-m02 host status = "Running" (err=<nil>)
	I1216 10:59:59.298805  995431 host.go:66] Checking if "multinode-736390-m02" exists ...
	I1216 10:59:59.299042  995431 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-736390-m02
	I1216 10:59:59.315515  995431 host.go:66] Checking if "multinode-736390-m02" exists ...
	I1216 10:59:59.315734  995431 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 10:59:59.315768  995431 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-736390-m02
	I1216 10:59:59.332297  995431 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33279 SSHKeyPath:/home/jenkins/minikube-integration/20107-840384/.minikube/machines/multinode-736390-m02/id_rsa Username:docker}
	I1216 10:59:59.415922  995431 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 10:59:59.426145  995431 status.go:176] multinode-736390-m02 status: &{Name:multinode-736390-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1216 10:59:59.426173  995431 status.go:174] checking status of multinode-736390-m03 ...
	I1216 10:59:59.426433  995431 cli_runner.go:164] Run: docker container inspect multinode-736390-m03 --format={{.State.Status}}
	I1216 10:59:59.442722  995431 status.go:371] multinode-736390-m03 host status = "Stopped" (err=<nil>)
	I1216 10:59:59.442741  995431 status.go:384] host is not running, skipping remaining checks
	I1216 10:59:59.442752  995431 status.go:176] multinode-736390-m03 status: &{Name:multinode-736390-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.05s)
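Note: the status trace above shows how the apiserver check works on the control-plane node: find the kube-apiserver process, read its freezer cgroup to confirm it is THAWED, then probe /healthz on the node address. A minimal sketch for repeating that check by hand with a minikube binary, assuming the multinode-736390 profile from this run is still up (<pid> is a placeholder for the PID printed by the first command):

    # locate the apiserver process on the control-plane node
    minikube -p multinode-736390 ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
    # inspect its freezer cgroup entry (substitute <pid> by hand)
    minikube -p multinode-736390 ssh "sudo egrep '^[0-9]+:freezer:' /proc/<pid>/cgroup"
    # probe the apiserver health endpoint on the node IP used in this run
    curl -k https://192.168.67.2:8443/healthz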

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-736390 node start m03 -v=7 --alsologtostderr: (8.224748599s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.86s)
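Note: for reference, a minimal sketch of the single-node stop/start cycle this test exercises, assuming the multinode-736390 profile exists and m03 is the stopped worker:

    # stop only the third node, then bring it back
    minikube -p multinode-736390 node stop m03
    minikube -p multinode-736390 node start m03
    # confirm every host reports Running again
    minikube -p multinode-736390 status --alsologtostderr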

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (98.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-736390
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-736390
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-736390: (24.663355951s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-736390 --wait=true -v=8 --alsologtostderr
E1216 11:01:06.864257  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-736390 --wait=true -v=8 --alsologtostderr: (1m13.650634187s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-736390
--- PASS: TestMultiNode/serial/RestartKeepsNodes (98.41s)
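Note: the assertion here is that a full stop and restart preserves the node list. A rough by-hand equivalent, same profile assumed:

    minikube node list -p multinode-736390      # record the nodes before
    minikube stop -p multinode-736390
    minikube start -p multinode-736390 --wait=true
    minikube node list -p multinode-736390      # expect the same nodes after
    kubectl get nodes                           # all nodes should return to Ready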

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-736390 node delete m03: (4.636363666s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.18s)
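Note: the readiness assertion uses a kubectl go-template rather than -o wide. A minimal sketch of the same check against the current context:

    # remove the worker, then confirm the cluster view shrinks accordingly
    minikube -p multinode-736390 node delete m03
    kubectl get nodes
    # print only the Ready condition status for every node
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'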

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-736390 stop: (23.529481028s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-736390 status: exit status 7 (88.471689ms)

                                                
                                                
-- stdout --
	multinode-736390
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-736390-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-736390 status --alsologtostderr: exit status 7 (83.237909ms)

                                                
                                                
-- stdout --
	multinode-736390
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-736390-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 11:02:15.557779 1005167 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:02:15.557886 1005167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:02:15.557895 1005167 out.go:358] Setting ErrFile to fd 2...
	I1216 11:02:15.557899 1005167 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:02:15.558089 1005167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-840384/.minikube/bin
	I1216 11:02:15.558261 1005167 out.go:352] Setting JSON to false
	I1216 11:02:15.558289 1005167 mustload.go:65] Loading cluster: multinode-736390
	I1216 11:02:15.558421 1005167 notify.go:220] Checking for updates...
	I1216 11:02:15.558823 1005167 config.go:182] Loaded profile config "multinode-736390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:02:15.558852 1005167 status.go:174] checking status of multinode-736390 ...
	I1216 11:02:15.559397 1005167 cli_runner.go:164] Run: docker container inspect multinode-736390 --format={{.State.Status}}
	I1216 11:02:15.577770 1005167 status.go:371] multinode-736390 host status = "Stopped" (err=<nil>)
	I1216 11:02:15.577806 1005167 status.go:384] host is not running, skipping remaining checks
	I1216 11:02:15.577818 1005167 status.go:176] multinode-736390 status: &{Name:multinode-736390 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 11:02:15.577878 1005167 status.go:174] checking status of multinode-736390-m02 ...
	I1216 11:02:15.578140 1005167 cli_runner.go:164] Run: docker container inspect multinode-736390-m02 --format={{.State.Status}}
	I1216 11:02:15.593709 1005167 status.go:371] multinode-736390-m02 host status = "Stopped" (err=<nil>)
	I1216 11:02:15.593735 1005167 status.go:384] host is not running, skipping remaining checks
	I1216 11:02:15.593740 1005167 status.go:176] multinode-736390-m02 status: &{Name:multinode-736390-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.70s)
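Note: `status` intentionally exits non-zero (exit status 7) once every host is stopped, so scripts should branch on the exit code rather than treat it as a failure. A small sketch, reusing the profile from this run:

    minikube -p multinode-736390 stop
    minikube -p multinode-736390 status
    echo "status exit code: $?"    # 7 expected when all node hosts are Stopped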

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (53.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-736390 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-736390 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (53.138974674s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-736390 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.70s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (21.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-736390
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-736390-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-736390-m02 --driver=docker  --container-runtime=crio: exit status 14 (60.764992ms)

                                                
                                                
-- stdout --
	* [multinode-736390-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-840384/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-840384/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-736390-m02' is duplicated with machine name 'multinode-736390-m02' in profile 'multinode-736390'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-736390-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-736390-m03 --driver=docker  --container-runtime=crio: (19.644811905s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-736390
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-736390: exit status 80 (264.339304ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-736390 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-736390-m03 already exists in multinode-736390-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-736390-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-736390-m03: (1.83819804s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (21.86s)
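Note: both failures above are name collisions: a new profile may not reuse a machine name that already belongs to an existing multi-node profile (exit 14), and `node add` refuses a node name already owned by another profile (exit 80). A sketch of checking existing names first; the final profile name is hypothetical:

    # list profiles and their machines before picking a new profile name
    minikube profile list
    minikube node list -p multinode-736390
    # then start under a non-colliding name (example name only)
    minikube start -p multinode-demo --driver=docker --container-runtime=crio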

                                                
                                    
x
+
TestPreload (103.22s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-443364 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1216 11:03:49.179699  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:04:43.797335  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-443364 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m16.554588048s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-443364 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-443364 image pull gcr.io/k8s-minikube/busybox: (2.166231333s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-443364
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-443364: (5.674369306s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-443364 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1216 11:05:12.243791  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-443364 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (16.266534392s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-443364 image list
helpers_test.go:175: Cleaning up "test-preload-443364" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-443364
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-443364: (2.310413906s)
--- PASS: TestPreload (103.22s)
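Note: the sequence above verifies that an image pulled into a non-preloaded v1.24.4 cluster survives a stop/start once the preload tarball is used on restart. A condensed sketch of the same flow, with a hypothetical profile name:

    minikube start -p preload-demo --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --driver=docker --container-runtime=crio
    minikube -p preload-demo image list     # busybox should still be listed
    minikube delete -p preload-demo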

                                                
                                    
x
+
TestScheduledStopUnix (96.55s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-833752 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-833752 --memory=2048 --driver=docker  --container-runtime=crio: (20.447299978s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-833752 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-833752 -n scheduled-stop-833752
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-833752 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1216 11:05:39.161155  847292 retry.go:31] will retry after 61.872µs: open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/scheduled-stop-833752/pid: no such file or directory
I1216 11:05:39.162325  847292 retry.go:31] will retry after 215.505µs: open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/scheduled-stop-833752/pid: no such file or directory
I1216 11:05:39.163488  847292 retry.go:31] will retry after 277.026µs: open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/scheduled-stop-833752/pid: no such file or directory
I1216 11:05:39.164624  847292 retry.go:31] will retry after 390.401µs: open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/scheduled-stop-833752/pid: no such file or directory
I1216 11:05:39.165755  847292 retry.go:31] will retry after 475.672µs: open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/scheduled-stop-833752/pid: no such file or directory
I1216 11:05:39.166867  847292 retry.go:31] will retry after 585.102µs: open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/scheduled-stop-833752/pid: no such file or directory
I1216 11:05:39.167982  847292 retry.go:31] will retry after 753.286µs: open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/scheduled-stop-833752/pid: no such file or directory
I1216 11:05:39.169111  847292 retry.go:31] will retry after 1.157425ms: open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/scheduled-stop-833752/pid: no such file or directory
I1216 11:05:39.171267  847292 retry.go:31] will retry after 2.285516ms: open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/scheduled-stop-833752/pid: no such file or directory
I1216 11:05:39.174499  847292 retry.go:31] will retry after 3.023211ms: open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/scheduled-stop-833752/pid: no such file or directory
I1216 11:05:39.177623  847292 retry.go:31] will retry after 4.690059ms: open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/scheduled-stop-833752/pid: no such file or directory
I1216 11:05:39.182937  847292 retry.go:31] will retry after 5.078185ms: open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/scheduled-stop-833752/pid: no such file or directory
I1216 11:05:39.188067  847292 retry.go:31] will retry after 14.706538ms: open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/scheduled-stop-833752/pid: no such file or directory
I1216 11:05:39.203288  847292 retry.go:31] will retry after 10.898528ms: open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/scheduled-stop-833752/pid: no such file or directory
I1216 11:05:39.214514  847292 retry.go:31] will retry after 20.971279ms: open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/scheduled-stop-833752/pid: no such file or directory
I1216 11:05:39.235711  847292 retry.go:31] will retry after 55.451001ms: open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/scheduled-stop-833752/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-833752 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-833752 -n scheduled-stop-833752
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-833752
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-833752 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-833752
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-833752: exit status 7 (69.753922ms)

                                                
                                                
-- stdout --
	scheduled-stop-833752
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-833752 -n scheduled-stop-833752
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-833752 -n scheduled-stop-833752: exit status 7 (68.245356ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-833752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-833752
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-833752: (4.779098934s)
--- PASS: TestScheduledStopUnix (96.55s)
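Note: the scheduled-stop workflow being exercised: a stop can be queued for later, cancelled, or re-queued, and the pending timer is visible through the TimeToStop status field. A minimal sketch against a running profile (name hypothetical):

    minikube stop -p sched-demo --schedule 5m          # queue a stop 5 minutes out
    minikube status -p sched-demo --format={{.TimeToStop}}
    minikube stop -p sched-demo --cancel-scheduled     # cancel the pending stop
    minikube stop -p sched-demo --schedule 15s         # re-queue; the host stops shortly after
    minikube status -p sched-demo --format={{.Host}}   # eventually prints Stopped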

                                                
                                    
x
+
TestInsufficientStorage (12.53s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-482947 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-482947 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.245758191s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2ccdd53d-41c6-46b3-b7bd-5e9f15e47708","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-482947] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bbef7d43-df47-477b-b5a3-748b068f2479","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20107"}}
	{"specversion":"1.0","id":"71e0419c-5c86-44f2-b960-8e4a2d8ae7dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"654f409b-bb98-4bf0-9a0a-4e7a56ca3378","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20107-840384/kubeconfig"}}
	{"specversion":"1.0","id":"f0c66325-7545-43ec-a48f-e34cf3bac5d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-840384/.minikube"}}
	{"specversion":"1.0","id":"ea5ba132-29c1-44d4-a05a-33e2655ea5d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b611cfb8-6b02-41be-b489-b9def0736c0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6aba0f7f-0487-414e-b014-25dac319724a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0b5f6b9b-ba8f-4d6b-9ec2-9b8124d2d87e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4870cd4e-60d1-4b62-830a-39e1fd8493a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"89bdea0a-a17a-4255-aca7-915ba06e0ce8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ced83dec-26ce-43ec-b574-8c8aafda9652","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-482947\" primary control-plane node in \"insufficient-storage-482947\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c2b4e576-cc48-4839-8558-15719bbcb666","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1733912881-20083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9384e25-51ae-47d3-a851-404b653147c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"51c4ded3-df29-460a-a781-39fb8a83c1ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-482947 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-482947 --output=json --layout=cluster: exit status 7 (252.8781ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-482947","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-482947","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 11:07:05.357774 1027720 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-482947" does not appear in /home/jenkins/minikube-integration/20107-840384/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-482947 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-482947 --output=json --layout=cluster: exit status 7 (247.099645ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-482947","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-482947","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 11:07:05.605607 1027820 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-482947" does not appear in /home/jenkins/minikube-integration/20107-840384/kubeconfig
	E1216 11:07:05.615277 1027820 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/insufficient-storage-482947/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-482947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-482947
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-482947: (1.784249539s)
--- PASS: TestInsufficientStorage (12.53s)
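Note: the storage guard is driven by the two MINIKUBE_TEST_* variables visible in the JSON events above, which simulate a nearly full /var for the test; `start` then aborts with exit code 26 (RSRC_DOCKER_STORAGE) and `status --output=json --layout=cluster` reports InsufficientStorage. A sketch of reproducing the simulated failure, with a hypothetical profile name:

    export MINIKUBE_TEST_STORAGE_CAPACITY=100
    export MINIKUBE_TEST_AVAILABLE_STORAGE=19
    minikube start -p storage-demo --output=json --driver=docker --container-runtime=crio
    echo "start exit code: $?"      # 26 expected
    minikube status -p storage-demo --output=json --layout=cluster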

                                                
                                    
x
+
TestRunningBinaryUpgrade (99.72s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.552877049 start -p running-upgrade-435099 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.552877049 start -p running-upgrade-435099 --memory=2200 --vm-driver=docker  --container-runtime=crio: (30.12474005s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-435099 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-435099 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m6.394635184s)
helpers_test.go:175: Cleaning up "running-upgrade-435099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-435099
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-435099: (2.793545951s)
--- PASS: TestRunningBinaryUpgrade (99.72s)
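Note: the upgrade path here is to create the cluster with an older release binary, then re-run `start` on the same profile with the current binary so it adopts and upgrades the running cluster in place. A sketch, assuming an older minikube release has already been downloaded locally as ./minikube-v1.26.0 (hypothetical path) and using the older binary's --vm-driver spelling as in this run:

    ./minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=crio
    # same profile, newer binary: picks up the existing cluster and upgrades it
    minikube start -p upgrade-demo --memory=2200 --driver=docker --container-runtime=crio
    minikube delete -p upgrade-demo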

                                                
                                    
x
+
TestKubernetesUpgrade (350.26s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-693444 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1216 11:08:49.179683  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-693444 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.235520667s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-693444
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-693444: (1.21887791s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-693444 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-693444 status --format={{.Host}}: exit status 7 (77.320658ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-693444 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-693444 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.883421443s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-693444 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-693444 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-693444 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (100.304854ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-693444] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-840384/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-840384/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-693444
	    minikube start -p kubernetes-upgrade-693444 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6934442 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-693444 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-693444 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-693444 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.310345512s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-693444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-693444
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-693444: (2.369279602s)
--- PASS: TestKubernetesUpgrade (350.26s)
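Note: the version walk above is upgrade-only: moving the existing cluster from v1.31.2 back to v1.20.0 is rejected with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED), and the suggested recovery is to delete and recreate. A compressed sketch with a hypothetical profile name:

    minikube start -p k8s-upgrade-demo --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
    minikube stop -p k8s-upgrade-demo
    minikube start -p k8s-upgrade-demo --kubernetes-version=v1.31.2 --driver=docker --container-runtime=crio
    kubectl --context k8s-upgrade-demo version --output=json
    # a downgrade attempt on the same profile fails fast; delete and recreate instead
    minikube start -p k8s-upgrade-demo --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio || echo "downgrade refused (exit $?)"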

                                                
                                    
x
+
TestMissingContainerUpgrade (126.03s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3231443382 start -p missing-upgrade-914366 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3231443382 start -p missing-upgrade-914366 --memory=2200 --driver=docker  --container-runtime=crio: (1m1.60322396s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-914366
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-914366: (10.497477114s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-914366
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-914366 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-914366 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.421741434s)
helpers_test.go:175: Cleaning up "missing-upgrade-914366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-914366
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-914366: (1.993844245s)
--- PASS: TestMissingContainerUpgrade (126.03s)
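Note: this scenario deletes the node container out from under an existing profile and checks that a later `start` with the current binary recreates it. A sketch of the recovery step alone, assuming a docker-driver profile whose container has gone missing, with the names from this run:

    docker stop missing-upgrade-914366 && docker rm missing-upgrade-914366
    # start recreates the node container and restores the cluster for the profile
    minikube start -p missing-upgrade-914366 --memory=2200 --driver=docker --container-runtime=crio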

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-665064 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-665064 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (88.889272ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-665064] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-840384/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-840384/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
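Note: as the error explains, `--no-kubernetes` cannot be combined with an explicit `--kubernetes-version`; if a version is pinned in the global config it has to be unset first. A minimal sketch:

    # clear any globally pinned version, then start a node without Kubernetes
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-665064 --no-kubernetes --driver=docker --container-runtime=crio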

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (35.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-665064 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-665064 --driver=docker  --container-runtime=crio: (34.962010999s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-665064 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (7.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-275161 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-275161 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (190.666532ms)

                                                
                                                
-- stdout --
	* [false-275161] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-840384/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-840384/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 11:07:11.509744 1030122 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:07:11.509929 1030122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:07:11.509944 1030122 out.go:358] Setting ErrFile to fd 2...
	I1216 11:07:11.509952 1030122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:07:11.510245 1030122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-840384/.minikube/bin
	I1216 11:07:11.511125 1030122 out.go:352] Setting JSON to false
	I1216 11:07:11.512628 1030122 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":13778,"bootTime":1734333453,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 11:07:11.512774 1030122 start.go:139] virtualization: kvm guest
	I1216 11:07:11.515022 1030122 out.go:177] * [false-275161] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 11:07:11.516371 1030122 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 11:07:11.516375 1030122 notify.go:220] Checking for updates...
	I1216 11:07:11.519078 1030122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:07:11.520526 1030122 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-840384/kubeconfig
	I1216 11:07:11.521826 1030122 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-840384/.minikube
	I1216 11:07:11.523176 1030122 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 11:07:11.524452 1030122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:07:11.526178 1030122 config.go:182] Loaded profile config "NoKubernetes-665064": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:07:11.526362 1030122 config.go:182] Loaded profile config "force-systemd-env-670894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:07:11.526513 1030122 config.go:182] Loaded profile config "offline-crio-642138": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:07:11.526658 1030122 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:07:11.555188 1030122 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 11:07:11.555311 1030122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 11:07:11.620896 1030122 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:true NGoroutines:90 SystemTime:2024-12-16 11:07:11.608970982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1216 11:07:11.621069 1030122 docker.go:318] overlay module found
	I1216 11:07:11.625149 1030122 out.go:177] * Using the docker driver based on user configuration
	I1216 11:07:11.626369 1030122 start.go:297] selected driver: docker
	I1216 11:07:11.626384 1030122 start.go:901] validating driver "docker" against <nil>
	I1216 11:07:11.626398 1030122 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:07:11.628928 1030122 out.go:201] 
	W1216 11:07:11.630240 1030122 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1216 11:07:11.631397 1030122 out.go:201] 

                                                
                                                
** /stderr **
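Note: this failure is expected: with the crio runtime, minikube refuses `--cni=false` because the runtime needs a CNI plugin for pod networking, so the debugLogs that follow run against a profile that was never created. A sketch of a start line that satisfies the constraint, simply omitting the flag so minikube auto-selects a CNI:

    # leave --cni unset and let minikube choose one (crio requires a CNI)
    minikube start -p false-275161 --memory=2048 --driver=docker --container-runtime=crio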
net_test.go:88: 
----------------------- debugLogs start: false-275161 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-275161

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-275161

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-275161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-275161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-275161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-275161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-275161

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-275161

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-275161

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-275161

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-275161

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-275161" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-275161" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-275161

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275161"

                                                
                                                
----------------------- debugLogs end: false-275161 [took: 6.935413137s] --------------------------------
helpers_test.go:175: Cleaning up "false-275161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-275161
--- PASS: TestNetworkPlugins/group/false (7.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (8.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-665064 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-665064 --no-kubernetes --driver=docker  --container-runtime=crio: (6.213290804s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-665064 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-665064 status -o json: exit status 2 (390.50025ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-665064","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-665064
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-665064: (2.04072633s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.64s)
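
For context, the exit status 2 from "status -o json" above is the expected outcome, not a failure: with --no-kubernetes the kubelet and API server are intentionally left stopped, and minikube's status command returns a non-zero exit code whenever a component is not Running, while the JSON body (Host Running, Kubelet/APIServer Stopped) carries the detail. A minimal manual repro, assuming a plain minikube binary in place of the locally built out/minikube-linux-amd64:

    minikube start -p NoKubernetes-665064 --no-kubernetes --driver=docker --container-runtime=crio
    minikube -p NoKubernetes-665064 status -o json   # Host "Running", Kubelet/APIServer "Stopped"
    echo $?                                          # non-zero (2 in this run) because not all components are running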

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-665064 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-665064 --no-kubernetes --driver=docker  --container-runtime=crio: (7.048302508s)
--- PASS: TestNoKubernetes/serial/Start (7.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-665064 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-665064 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.158945ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
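
The ssh exit status 3 above is the passing signal: systemctl is-active exits 0 only when the queried units are active and non-zero otherwise (3 here, i.e. inactive), so a non-zero exit confirms no kubelet is running in the profile. A rough manual equivalent, assuming the NoKubernetes-665064 profile from this run still exists:

    minikube ssh -p NoKubernetes-665064 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # 3 when the kubelet unit is inactive; 0 would mean Kubernetes is still running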

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.51s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.41s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (102.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1006505714 start -p stopped-upgrade-803489 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1006505714 start -p stopped-upgrade-803489 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m15.437629572s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1006505714 -p stopped-upgrade-803489 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1006505714 -p stopped-upgrade-803489 stop: (5.659655412s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-803489 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-803489 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.728442098s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (102.83s)
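
The upgrade scenario reduces to three steps: bring up a cluster with an old release (v1.26.0 here), stop it, then start the same profile with the binary under test so it has to adopt the existing, stopped cluster. Condensed from the commands above (the /tmp path is the temporary copy of the old release used by this run):

    /tmp/minikube-v1.26.0.1006505714 start -p stopped-upgrade-803489 --memory=2200 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.26.0.1006505714 -p stopped-upgrade-803489 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-803489 --memory=2200 --driver=docker --container-runtime=crio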

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-665064
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-665064: (1.589677173s)
--- PASS: TestNoKubernetes/serial/Stop (1.59s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-665064 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-665064 --driver=docker  --container-runtime=crio: (7.248707827s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.25s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-665064 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-665064 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.174242ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (2.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-803489
E1216 11:09:43.798001  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-803489: (2.769133554s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.77s)

                                                
                                    
x
+
TestPause/serial/Start (40.01s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-137851 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-137851 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (40.013526078s)
--- PASS: TestPause/serial/Start (40.01s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (27.27s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-137851 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-137851 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.257391568s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.27s)

                                                
                                    
x
+
TestPause/serial/Pause (0.77s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-137851 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.77s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-137851 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-137851 --output=json --layout=cluster: exit status 2 (306.121063ms)

                                                
                                                
-- stdout --
	{"Name":"pause-137851","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-137851","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
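
Exit status 2 is again expected here: with the cluster paused, the apiserver component reports StatusCode 418 ("Paused") and the kubelet 405 ("Stopped"), and the non-zero exit from the status command mirrors that paused state. A quick way to pull just those fields from the JSON shown above (hypothetical, piping through jq):

    minikube status -p pause-137851 --output=json --layout=cluster \
      | jq '.Nodes[] | {node: .Name, apiserver: .Components.apiserver.StatusName, kubelet: .Components.kubelet.StatusName}'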

                                                
                                    
x
+
TestPause/serial/Unpause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-137851 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.84s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-137851 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.74s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-137851 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-137851 --alsologtostderr -v=5: (2.742917271s)
--- PASS: TestPause/serial/DeletePaused (2.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (44.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-275161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-275161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (44.61843399s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.62s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.79s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-137851
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-137851: exit status 1 (25.518497ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-137851: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (67.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-275161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-275161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m7.435474678s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (46.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-275161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-275161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (46.669467748s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-275161 "pgrep -a kubelet"
I1216 11:12:13.438819  847292 config.go:182] Loaded profile config "auto-275161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-275161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qnvkl" [ad1bcbbf-48d3-4f1f-8c29-86f66c062d18] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qnvkl" [ad1bcbbf-48d3-4f1f-8c29-86f66c062d18] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003755289s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.17s)
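
Each NetCatPod step follows the same pattern: force-replace the netcat deployment from testdata, then wait for the app=netcat pod to report Ready. Outside the test harness the same check can be approximated with kubectl's built-in wait (hypothetical one-off against this run's auto-275161 context):

    kubectl --context auto-275161 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-275161 wait --for=condition=Ready pod -l app=netcat --timeout=15m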

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-275161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-275161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-275161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
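
Localhost and HairPin both use netcat in port-scan mode from inside the netcat pod: -z only probes whether the port accepts a connection and -w 5 bounds the wait. Localhost targets the pod's own loopback, while HairPin dials the "netcat" service name, which routes back to the single netcat pod (the caller itself) and therefore only succeeds when hairpin NAT works. Roughly, as a manual check (hypothetical, dropping the -i pacing flag):

    kubectl --context auto-275161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z localhost 8080"   # in-pod loopback
    kubectl --context auto-275161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"      # via the service; exercises hairpin NAT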

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-pwk92" [1982c199-9b25-43e0-a7c2-1f2765262023] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004017674s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-275161 "pgrep -a kubelet"
I1216 11:12:30.755740  847292 config.go:182] Loaded profile config "flannel-275161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-275161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kk7rr" [397f3e73-6c05-42ab-af77-1dde83894f61] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kk7rr" [397f3e73-6c05-42ab-af77-1dde83894f61] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004374263s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-275161 "pgrep -a kubelet"
I1216 11:12:38.281577  847292 config.go:182] Loaded profile config "enable-default-cni-275161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-275161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mbvjz" [2098eb7e-23cf-4f9f-a4ef-7bc54b91fe1a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mbvjz" [2098eb7e-23cf-4f9f-a4ef-7bc54b91fe1a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004268004s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-275161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-275161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-275161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (52.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-275161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-275161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (52.407625015s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-275161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-275161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-275161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (43.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-275161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-275161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (43.500062881s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (43.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (69.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-275161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-275161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m9.178309308s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-lfgf7" [0434b052-c6a5-41fb-8e27-9587551676e1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005536669s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-275161 "pgrep -a kubelet"
I1216 11:13:39.799552  847292 config.go:182] Loaded profile config "calico-275161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-275161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m9zbs" [2670443b-e822-4a60-9a4f-7980948dabd8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m9zbs" [2670443b-e822-4a60-9a4f-7980948dabd8] Running
E1216 11:13:49.179680  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00419046s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-k9kw4" [d62ff6f3-29c6-44ef-8ae3-e3d34fcbecb4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003999928s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-275161 "pgrep -a kubelet"
I1216 11:13:49.871758  847292 config.go:182] Loaded profile config "kindnet-275161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-275161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dts4n" [a2cabc0f-b521-465a-8c1a-3516b589c0a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dts4n" [a2cabc0f-b521-465a-8c1a-3516b589c0a8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003440761s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-275161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-275161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-275161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-275161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-275161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-275161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (51.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-275161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-275161 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (51.235169999s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.24s)
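
Unlike the named-plugin groups, custom-flannel exercises the manifest form of the flag: minikube's --cni option accepts either a built-in plugin name or a path to a CNI manifest to apply, and here it is pointed at the kube-flannel manifest shipped in testdata. The essential invocation, trimmed of the test-only verbosity flags:

    minikube start -p custom-flannel-275161 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio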

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (139.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-417844 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-417844 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m19.8191347s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (139.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-275161 "pgrep -a kubelet"
I1216 11:14:18.297056  847292 config.go:182] Loaded profile config "bridge-275161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-275161 replace --force -f testdata/netcat-deployment.yaml
I1216 11:14:18.686192  847292 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1216 11:14:18.904872  847292 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fv67t" [9b59fc57-4f41-414e-8b8a-fd68d2efe1e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fv67t" [9b59fc57-4f41-414e-8b8a-fd68d2efe1e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004157785s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.66s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (57.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-963312 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-963312 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (57.318031021s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-275161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-275161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-275161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (45.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-803642 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-803642 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (45.592516058s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-275161 "pgrep -a kubelet"
I1216 11:15:00.459457  847292 config.go:182] Loaded profile config "custom-flannel-275161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-275161 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-94m96" [0ef74251-35ba-421b-af28-bf32286dcea9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-94m96" [0ef74251-35ba-421b-af28-bf32286dcea9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003855227s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-275161 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-275161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-275161 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-963312 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dbcf88f7-a526-435e-8c6f-472a8da1580e] Pending
helpers_test.go:344: "busybox" [dbcf88f7-a526-435e-8c6f-472a8da1580e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dbcf88f7-a526-435e-8c6f-472a8da1580e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004463686s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-963312 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.24s)
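
To replay this deploy-and-probe step outside the test harness, the same kubectl calls can be issued against the profile's context. The harness polls for the "integration-test=busybox" label in Go; the kubectl wait line below is a hedged stand-in for that poll, not a command taken from this log:

	kubectl --context no-preload-963312 create -f testdata/busybox.yaml
	# assumption: wait for readiness instead of the harness's own 8m0s poll loop
	kubectl --context no-preload-963312 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m0s
	kubectl --context no-preload-963312 exec busybox -- /bin/sh -c "ulimit -n"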

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-963312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-963312 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-963312 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-963312 --alsologtostderr -v=3: (11.920758248s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-807504 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-807504 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (45.419571927s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-803642 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [56bbfd1a-2694-40ba-a794-343e4efb3001] Pending
helpers_test.go:344: "busybox" [56bbfd1a-2694-40ba-a794-343e4efb3001] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [56bbfd1a-2694-40ba-a794-343e4efb3001] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003704915s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-803642 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963312 -n no-preload-963312
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963312 -n no-preload-963312: exit status 7 (70.907286ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-963312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)
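
This step deliberately runs against a stopped profile: the status query exits non-zero because the host is down (the "may be ok" note above), and the addon change is still recorded in the profile config for the next start. Condensed, the sequence it exercises is:

	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963312 -n no-preload-963312      # prints "Stopped", exits non-zero
	out/minikube-linux-amd64 addons enable dashboard -p no-preload-963312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4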

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (262.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-963312 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-963312 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m22.428761105s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-963312 -n no-preload-963312
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.73s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-803642 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-803642 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-803642 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-803642 --alsologtostderr -v=3: (12.274551006s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-803642 -n embed-certs-803642
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-803642 -n embed-certs-803642: exit status 7 (94.885091ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-803642 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (272.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-803642 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-803642 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m32.32191739s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-803642 -n embed-certs-803642
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (272.60s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-807504 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [66e1a78d-b4ca-46a7-8d07-c03c035123f7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [66e1a78d-b4ca-46a7-8d07-c03c035123f7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004232305s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-807504 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-807504 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-807504 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-807504 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-807504 --alsologtostderr -v=3: (11.870167098s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.87s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-417844 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9919cb96-0184-4732-828c-a79d7cec4821] Pending
helpers_test.go:344: "busybox" [9919cb96-0184-4732-828c-a79d7cec4821] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9919cb96-0184-4732-828c-a79d7cec4821] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.002793221s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-417844 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-807504 -n default-k8s-diff-port-807504
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-807504 -n default-k8s-diff-port-807504: exit status 7 (72.469389ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-807504 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-807504 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-807504 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m21.738744194s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-807504 -n default-k8s-diff-port-807504
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-417844 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-417844 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.72s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-417844 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-417844 --alsologtostderr -v=3: (11.905374216s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.91s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-417844 -n old-k8s-version-417844
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-417844 -n old-k8s-version-417844: exit status 7 (84.783157ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-417844 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (138.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-417844 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1216 11:17:13.597887  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/auto-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:13.604265  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/auto-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:13.615626  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/auto-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:13.637040  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/auto-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:13.678384  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/auto-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:13.759813  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/auto-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:13.921530  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/auto-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:14.243217  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/auto-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:14.885259  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/auto-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:16.167446  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/auto-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:18.729165  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/auto-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:23.850566  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/auto-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:24.475381  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:24.481767  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:24.493109  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:24.514491  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:24.555797  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:24.637226  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:24.799179  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:25.120529  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:25.762816  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:27.044863  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:29.606224  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:34.092817  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/auto-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:34.727928  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:38.470016  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/enable-default-cni-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:38.476415  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/enable-default-cni-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:38.487761  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/enable-default-cni-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:38.509140  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/enable-default-cni-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:38.550501  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/enable-default-cni-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:38.631952  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/enable-default-cni-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:38.793646  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/enable-default-cni-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:39.115310  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/enable-default-cni-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:39.757411  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/enable-default-cni-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:41.039072  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/enable-default-cni-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:43.600494  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/enable-default-cni-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:44.969917  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:46.865641  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:48.722886  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/enable-default-cni-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:54.574146  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/auto-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:17:58.964607  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/enable-default-cni-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:05.451389  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:19.446832  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/enable-default-cni-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:33.506494  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/calico-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:33.512861  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/calico-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:33.524249  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/calico-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:33.545703  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/calico-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:33.587137  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/calico-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:33.668908  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/calico-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:33.830536  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/calico-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:34.151855  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/calico-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:34.793277  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/calico-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:35.535609  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/auto-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:36.075596  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/calico-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:38.637137  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/calico-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:43.615154  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/kindnet-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:43.621560  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/kindnet-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:43.632928  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/kindnet-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:43.654326  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/kindnet-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:43.695849  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/kindnet-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:43.759206  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/calico-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:43.777571  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/kindnet-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:43.939500  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/kindnet-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:44.261344  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/kindnet-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:44.902928  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/kindnet-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:46.184670  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/kindnet-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:46.413278  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:48.746898  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/kindnet-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:49.179332  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/functional-003749/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:53.868612  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/kindnet-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:18:54.001122  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/calico-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:19:00.409156  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/enable-default-cni-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:19:04.109962  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/kindnet-275161/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-417844 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m18.145869249s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-417844 -n old-k8s-version-417844
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (138.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-q7hnx" [72f00f65-2b23-4e89-8083-c2dc9d43edda] Running
E1216 11:19:14.482465  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/calico-275161/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004299418s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-q7hnx" [72f00f65-2b23-4e89-8083-c2dc9d43edda] Running
E1216 11:19:18.680679  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/bridge-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:19:18.687019  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/bridge-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:19:18.698433  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/bridge-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:19:18.719730  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/bridge-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:19:18.761050  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/bridge-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:19:18.842436  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/bridge-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:19:19.003769  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/bridge-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:19:19.325707  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/bridge-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:19:19.966985  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/bridge-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:19:21.248572  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/bridge-275161/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00381693s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-417844 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-417844 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-417844 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-417844 -n old-k8s-version-417844
E1216 11:19:23.810130  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/bridge-275161/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-417844 -n old-k8s-version-417844: exit status 2 (285.531282ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-417844 -n old-k8s-version-417844
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-417844 -n old-k8s-version-417844: exit status 2 (280.241926ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-417844 --alsologtostderr -v=1
E1216 11:19:24.591877  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/kindnet-275161/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-417844 -n old-k8s-version-417844
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-417844 -n old-k8s-version-417844
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.49s)
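
The Pause check drives a full pause/verify/unpause/verify cycle using only the commands shown above: while paused, the status templates report the apiserver as "Paused" and the kubelet as "Stopped", each with a non-zero exit that the test accepts, and the same queries are repeated after unpausing. A condensed replay of that sequence:

	out/minikube-linux-amd64 pause -p old-k8s-version-417844 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-417844 -n old-k8s-version-417844   # Paused
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-417844 -n old-k8s-version-417844     # Stopped
	out/minikube-linux-amd64 unpause -p old-k8s-version-417844 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-417844 -n old-k8s-version-417844
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-417844 -n old-k8s-version-417844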

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (28.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-594624 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1216 11:19:28.932582  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/bridge-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:19:39.173874  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/bridge-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:19:43.797173  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/addons-109663/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:19:55.444443  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/calico-275161/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-594624 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (28.656407517s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.66s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-594624 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1216 11:19:57.457370  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/auto-275161/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-594624 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.151844102s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-594624 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-594624 --alsologtostderr -v=3: (1.192537802s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-594624 -n newest-cni-594624
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-594624 -n newest-cni-594624: exit status 7 (65.376131ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-594624 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (12.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-594624 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1216 11:19:59.655883  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/bridge-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:20:00.627200  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/custom-flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:20:00.633604  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/custom-flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:20:00.645640  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/custom-flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:20:00.666964  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/custom-flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:20:00.708751  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/custom-flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:20:00.790241  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/custom-flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:20:00.951713  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/custom-flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:20:01.273862  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/custom-flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:20:01.915867  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/custom-flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:20:03.197297  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/custom-flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-594624 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (12.281899427s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-594624 -n newest-cni-594624
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.63s)
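The restart can be reproduced outside the harness with the same flags; a sketch, assuming the profile already exists from the earlier Stop step:

# Restart the stopped profile with identical CNI, runtime and version settings
out/minikube-linux-amd64 start -p newest-cni-594624 --memory=2200 \
  --wait=apiserver,system_pods,default_sa --network-plugin=cni \
  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=docker --container-runtime=crio --kubernetes-version=v1.31.2
# Confirm the host is running again
out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-594624 -n newest-cni-594624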

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dd2dk" [410f6a30-33eb-4b51-93b5-973e287cfa23] Running
E1216 11:20:05.554174  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/kindnet-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:20:05.759330  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/custom-flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:20:08.335104  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004105285s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dd2dk" [410f6a30-33eb-4b51-93b5-973e287cfa23] Running
E1216 11:20:10.881666  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/custom-flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003653088s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-963312 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)
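Both dashboard checks poll for a Running pod labeled k8s-app=kubernetes-dashboard. A rough manual equivalent with kubectl wait (an approximation of the harness's polling, not its exact mechanism):

# Wait for the dashboard pod to become Ready in its namespace
kubectl --context no-preload-963312 -n kubernetes-dashboard wait \
  --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=90s
# Then inspect the metrics scraper deployment, as the test does
kubectl --context no-preload-963312 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper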

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-594624 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-594624 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-594624 -n newest-cni-594624
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-594624 -n newest-cni-594624: exit status 2 (279.881647ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-594624 -n newest-cni-594624
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-594624 -n newest-cni-594624: exit status 2 (275.297282ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-594624 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-594624 -n newest-cni-594624
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-594624 -n newest-cni-594624
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.59s)
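The Pause subtests are a four-step CLI round trip; while paused, the API server status prints Paused and the kubelet prints Stopped, each with exit status 2, which the test treats as acceptable. A sketch of the same sequence:

out/minikube-linux-amd64 pause -p newest-cni-594624 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-594624 -n newest-cni-594624   # "Paused", exit 2
out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-594624 -n newest-cni-594624     # "Stopped", exit 2
out/minikube-linux-amd64 unpause -p newest-cni-594624 --alsologtostderr -v=1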

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-963312 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
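Image verification is a single list call whose JSON output is then scanned for non-minikube images. A sketch; piping through jq is only an illustrative convenience and is not part of the test:

# List the images present in the profile as JSON
out/minikube-linux-amd64 -p no-preload-963312 image list --format=json
# Optionally pretty-print for manual inspection (jq assumed to be installed)
out/minikube-linux-amd64 -p no-preload-963312 image list --format=json | jq .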

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-963312 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-963312 -n no-preload-963312
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-963312 -n no-preload-963312: exit status 2 (286.108385ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-963312 -n no-preload-963312
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-963312 -n no-preload-963312: exit status 2 (293.715397ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-963312 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-963312 -n no-preload-963312
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-963312 -n no-preload-963312
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.67s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4hgc5" [7a58d168-cc5e-4de4-b3c3-ab2c7916c1e2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003373117s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4hgc5" [7a58d168-cc5e-4de4-b3c3-ab2c7916c1e2] Running
E1216 11:20:40.617251  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/bridge-275161/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:20:41.604679  847292 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-840384/.minikube/profiles/custom-flannel-275161/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003243809s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-803642 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-803642 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-803642 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-803642 -n embed-certs-803642
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-803642 -n embed-certs-803642: exit status 2 (287.658598ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-803642 -n embed-certs-803642
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-803642 -n embed-certs-803642: exit status 2 (278.236678ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-803642 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-803642 -n embed-certs-803642
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-803642 -n embed-certs-803642
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.47s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vgwgh" [bc2eb226-1746-4e09-9044-f7a5c1957c31] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003474467s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vgwgh" [bc2eb226-1746-4e09-9044-f7a5c1957c31] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003614052s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-807504 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-807504 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-807504 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-807504 -n default-k8s-diff-port-807504
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-807504 -n default-k8s-diff-port-807504: exit status 2 (271.391706ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-807504 -n default-k8s-diff-port-807504
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-807504 -n default-k8s-diff-port-807504: exit status 2 (274.028383ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-807504 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-807504 -n default-k8s-diff-port-807504
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-807504 -n default-k8s-diff-port-807504
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.42s)

                                                
                                    

Test skip (26/330)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.26s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-109663 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.26s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-275161 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-275161

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-275161

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-275161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-275161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-275161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-275161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-275161

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-275161

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-275161

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-275161

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-275161

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-275161" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-275161" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-275161

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275161"

                                                
                                                
----------------------- debugLogs end: kubenet-275161 [took: 3.835618381s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-275161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-275161
--- SKIP: TestNetworkPlugins/group/kubenet (4.04s)
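The skipped group still ends with a profile cleanup; the equivalent manual steps, with the final listing added here only as an optional sanity check:

# Delete the throwaway kubenet profile referenced in the debug logs
out/minikube-linux-amd64 delete -p kubenet-275161
# Optional: confirm it no longer appears among the profiles
out/minikube-linux-amd64 profile list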

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-275161 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-275161

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-275161

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-275161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-275161

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-275161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-275161

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-275161

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-275161

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-275161

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-275161

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-275161

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-275161" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-275161

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-275161

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-275161

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-275161

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-275161" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-275161" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-275161

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-275161" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275161"

                                                
                                                
----------------------- debugLogs end: cilium-275161 [took: 3.726305239s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-275161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-275161
--- SKIP: TestNetworkPlugins/group/cilium (3.88s)
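
The wall of "context was not found" and "Profile ... not found" messages above is expected: the cilium group was skipped, so the cilium-275161 profile was never started, yet the post-mortem debug-log collector still ran every probe against it. As a minimal sketch only (not minikube's actual helper; contextExists is a hypothetical function), a guard of this shape would avoid the noise by checking for the kubectl context before collecting:

```go
// Sketch: skip debug-log collection when the profile's kubectl context was
// never created, which is why every command above failed.
package main

import (
	"fmt"
	"os/exec"
)

// contextExists is a hypothetical helper: kubectl exits non-zero when the
// named context is missing, so a nil error means the context is known.
func contextExists(name string) bool {
	return exec.Command("kubectl", "config", "get-contexts", name).Run() == nil
}

func main() {
	profile := "cilium-275161"
	if !contextExists(profile) {
		fmt.Printf("profile %s has no kubectl context; skipping debug-log collection\n", profile)
		return
	}
	// ... run the dig/nc/kubectl describe probes shown in the log above ...
}
```

Whether the real suite should short-circuit like this is a design choice; the sketch only shows why the repeated errors are harmless here.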

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-628761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-628761
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
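
The skip above is driven purely by the active driver, which is docker in this run. The sketch below is illustrative only and is not the code in start_stop_delete_test.go; DriverName is a stand-in for however the suite exposes the current driver.

```go
// Illustrative driver-gated skip of the shape that produces the
// "only runs on virtualbox" message above.
package startstop

import "testing"

// DriverName would normally come from the test flags; "docker" in this run.
var DriverName = "docker"

func TestDisableDriverMounts(t *testing.T) {
	if DriverName != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
	// ... exercise --disable-driver-mounts against a running cluster ...
}
```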

                                                
                                    