Test Report: Docker_Linux_crio 20363

7e7f32fac0d8189b7e029c65d7fa3a0906f68836:2025-02-05:38218

Failed tests (6/324)

| Order | Failed test                                                | Duration (s) |
|-------|------------------------------------------------------------|--------------|
| 36    | TestAddons/parallel/Ingress                                | 151.51       |
| 99    | TestFunctional/parallel/PersistentVolumeClaim              | 187.87       |
| 103   | TestFunctional/parallel/MySQL                              | 602.73       |
| 131   | TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup | 240.6        |
| 149   | TestFunctional/parallel/TunnelCmd/serial/AccessDirect      | 104.36       |
| 318   | TestNetworkPlugins/group/flannel/Start                     | 266          |
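
The Ingress failure below comes down to an in-node curl against the ingress controller timing out: curl exits with code 28 on a timeout, which ssh surfaces as "Process exited with status 28". A rough manual reproduction, sketched under the assumption of a minikube source checkout (the nginx testdata manifests live under test/integration) and reusing the profile name, driver, runtime, and selectors shown in the log:

    # Start a comparable profile (driver, runtime and the two relevant addons from this job).
    minikube start -p addons-217306 --driver=docker --container-runtime=crio \
        --addons=ingress --addons=ingress-dns

    # Wait for the ingress controller, exactly as the test does.
    kubectl --context addons-217306 wait --for=condition=ready --namespace=ingress-nginx pod \
        --selector=app.kubernetes.io/component=controller --timeout=90s

    # Deploy the same nginx Ingress plus backing pod/service the test uses.
    kubectl --context addons-217306 replace --force -f testdata/nginx-ingress-v1.yaml
    kubectl --context addons-217306 replace --force -f testdata/nginx-pod-svc.yaml

    # The step that fails in this report: curl the ingress from inside the node.
    minikube -p addons-217306 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

If the last command hangs and eventually exits 28, nothing is answering on port 80 inside the node, which matches the stderr captured below.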
TestAddons/parallel/Ingress (151.51s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-217306 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-217306 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-217306 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7b1d5fc5-a7da-49f7-974d-5b5f465db6b4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7b1d5fc5-a7da-49f7-974d-5b5f465db6b4] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.002358269s
I0205 02:06:19.807389   19390 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-217306 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.58003066s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-217306 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
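
Between the failed curl and the post-mortem dump that follows, the report never captures the ingress controller's own state. A few follow-up checks one might run against the same profile while it is still up (context name and selector are taken from the log above; the commands are illustrative, not part of the test):

    # Is the controller pod actually Running, and on which node?
    kubectl --context addons-217306 -n ingress-nginx get pods \
        --selector=app.kubernetes.io/component=controller -o wide

    # Recent controller logs around the time of the timed-out curl.
    kubectl --context addons-217306 -n ingress-nginx logs \
        --selector=app.kubernetes.io/component=controller --tail=100

    # Confirm the Ingress object exists and was admitted by the controller.
    kubectl --context addons-217306 get ingress -A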
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-217306
helpers_test.go:235: (dbg) docker inspect addons-217306:

-- stdout --
	[
	    {
	        "Id": "fe98dd6bd6c16a7288117c1fff90ee41fdfbaf8a0d6539ec00798251897e53f1",
	        "Created": "2025-02-05T02:04:02.272331746Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 21439,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-05T02:04:02.408600626Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e72c4cbe9b296d8a58fbcae1a7b969fa1cee662cd7b86f2d4efc5e146519cf0a",
	        "ResolvConfPath": "/var/lib/docker/containers/fe98dd6bd6c16a7288117c1fff90ee41fdfbaf8a0d6539ec00798251897e53f1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe98dd6bd6c16a7288117c1fff90ee41fdfbaf8a0d6539ec00798251897e53f1/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe98dd6bd6c16a7288117c1fff90ee41fdfbaf8a0d6539ec00798251897e53f1/hosts",
	        "LogPath": "/var/lib/docker/containers/fe98dd6bd6c16a7288117c1fff90ee41fdfbaf8a0d6539ec00798251897e53f1/fe98dd6bd6c16a7288117c1fff90ee41fdfbaf8a0d6539ec00798251897e53f1-json.log",
	        "Name": "/addons-217306",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-217306:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-217306",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a8ff34d6d671a3f8acea848f048bf51884500521ac17ca4496c7745c9b68a666-init/diff:/var/lib/docker/overlay2/f186c7f5b5e3359a3aedb1825f83d9f64c1bd7ca8cd203398cd99d9b6a74d20a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8ff34d6d671a3f8acea848f048bf51884500521ac17ca4496c7745c9b68a666/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8ff34d6d671a3f8acea848f048bf51884500521ac17ca4496c7745c9b68a666/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8ff34d6d671a3f8acea848f048bf51884500521ac17ca4496c7745c9b68a666/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-217306",
	                "Source": "/var/lib/docker/volumes/addons-217306/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-217306",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-217306",
	                "name.minikube.sigs.k8s.io": "addons-217306",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "45910e0c1c2b37e1fdf0bcaa566b0753f059003f4cb46010f245e84d5dc87661",
	            "SandboxKey": "/var/run/docker/netns/45910e0c1c2b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-217306": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "aed0139843251563819d177b1385b0bea64e90ba5b93772bc981dd7efe244c93",
	                    "EndpointID": "a83fd021baa7d104ba0cbfbbea3880e1bc21003cf0a798dc4f46b740dcb163d3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-217306",
	                        "fe98dd6bd6c1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
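
Most of the inspect dump above is only interesting for the port bindings and the network block. Individual fields can be pulled with a Go template rather than scanning the JSON; for instance, the host port mapped to the node's SSH port, using the same template the provisioning log further down runs during machine setup:

    # Prints 32768 for this run, per the NetworkSettings.Ports block above.
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-217306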
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-217306 -n addons-217306
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-217306 logs -n 25: (1.084963169s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-777609 | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC |                     |
	|         | download-docker-777609                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-777609                                                                   | download-docker-777609 | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC | 05 Feb 25 02:03 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-310251   | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC |                     |
	|         | binary-mirror-310251                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37147                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-310251                                                                     | binary-mirror-310251   | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC | 05 Feb 25 02:03 UTC |
	| addons  | enable dashboard -p                                                                         | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC |                     |
	|         | addons-217306                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC |                     |
	|         | addons-217306                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-217306 --wait=true                                                                | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC | 05 Feb 25 02:05 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-217306 addons disable                                                                | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:05 UTC | 05 Feb 25 02:05 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-217306 addons disable                                                                | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:05 UTC | 05 Feb 25 02:05 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:05 UTC | 05 Feb 25 02:05 UTC |
	|         | -p addons-217306                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-217306 addons                                                                        | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:05 UTC | 05 Feb 25 02:05 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-217306 addons disable                                                                | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:05 UTC | 05 Feb 25 02:05 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-217306 addons                                                                        | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-217306 addons disable                                                                | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-217306 ip                                                                            | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	| addons  | addons-217306 addons disable                                                                | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-217306 ssh cat                                                                       | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | /opt/local-path-provisioner/pvc-4029b30c-10c6-440c-9aa0-78582bd94f12_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-217306 addons                                                                        | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-217306 addons disable                                                                | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-217306 addons                                                                        | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-217306 ssh curl -s                                                                   | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-217306 addons disable                                                                | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-217306 addons                                                                        | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:06 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-217306 addons                                                                        | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:06 UTC | 05 Feb 25 02:07 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-217306 ip                                                                            | addons-217306          | jenkins | v1.35.0 | 05 Feb 25 02:08 UTC | 05 Feb 25 02:08 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/05 02:03:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0205 02:03:38.134959   20697 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:03:38.135069   20697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:03:38.135077   20697 out.go:358] Setting ErrFile to fd 2...
	I0205 02:03:38.135085   20697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:03:38.135261   20697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
	I0205 02:03:38.135860   20697 out.go:352] Setting JSON to false
	I0205 02:03:38.136665   20697 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2764,"bootTime":1738718254,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 02:03:38.136762   20697 start.go:139] virtualization: kvm guest
	I0205 02:03:38.138817   20697 out.go:177] * [addons-217306] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 02:03:38.140313   20697 notify.go:220] Checking for updates...
	I0205 02:03:38.140357   20697 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 02:03:38.141754   20697 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 02:03:38.143147   20697 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig
	I0205 02:03:38.144419   20697 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube
	I0205 02:03:38.145821   20697 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 02:03:38.147158   20697 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 02:03:38.148611   20697 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 02:03:38.171034   20697 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0205 02:03:38.171150   20697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0205 02:03:38.217726   20697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-05 02:03:38.20937703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0205 02:03:38.217835   20697 docker.go:318] overlay module found
	I0205 02:03:38.219582   20697 out.go:177] * Using the docker driver based on user configuration
	I0205 02:03:38.220715   20697 start.go:297] selected driver: docker
	I0205 02:03:38.220726   20697 start.go:901] validating driver "docker" against <nil>
	I0205 02:03:38.220741   20697 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 02:03:38.221510   20697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0205 02:03:38.265272   20697 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-05 02:03:38.257345346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0205 02:03:38.265428   20697 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0205 02:03:38.265736   20697 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 02:03:38.267765   20697 out.go:177] * Using Docker driver with root privileges
	I0205 02:03:38.269389   20697 cni.go:84] Creating CNI manager for ""
	I0205 02:03:38.269462   20697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0205 02:03:38.269478   20697 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0205 02:03:38.269572   20697 start.go:340] cluster config:
	{Name:addons-217306 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-217306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPau
seInterval:1m0s}
	I0205 02:03:38.271603   20697 out.go:177] * Starting "addons-217306" primary control-plane node in "addons-217306" cluster
	I0205 02:03:38.273283   20697 cache.go:121] Beginning downloading kic base image for docker with crio
	I0205 02:03:38.274950   20697 out.go:177] * Pulling base image v0.0.46 ...
	I0205 02:03:38.276507   20697 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 02:03:38.276556   20697 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0205 02:03:38.276566   20697 cache.go:56] Caching tarball of preloaded images
	I0205 02:03:38.276621   20697 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0205 02:03:38.276659   20697 preload.go:172] Found /home/jenkins/minikube-integration/20363-12617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0205 02:03:38.276675   20697 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0205 02:03:38.277027   20697 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/config.json ...
	I0205 02:03:38.277065   20697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/config.json: {Name:mk8890f9f6006b9fb414e5558e7d21e9f27b64a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:03:38.294347   20697 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0205 02:03:38.294490   20697 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0205 02:03:38.294513   20697 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0205 02:03:38.294524   20697 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0205 02:03:38.294533   20697 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0205 02:03:38.294538   20697 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from local cache
	I0205 02:03:50.019634   20697 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from cached tarball
	I0205 02:03:50.019681   20697 cache.go:230] Successfully downloaded all kic artifacts
	I0205 02:03:50.019727   20697 start.go:360] acquireMachinesLock for addons-217306: {Name:mk808aedec02a15fb72e06ddf536c72939f48aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 02:03:50.019852   20697 start.go:364] duration metric: took 100.744µs to acquireMachinesLock for "addons-217306"
	I0205 02:03:50.019885   20697 start.go:93] Provisioning new machine with config: &{Name:addons-217306 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-217306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 02:03:50.019962   20697 start.go:125] createHost starting for "" (driver="docker")
	I0205 02:03:50.021621   20697 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0205 02:03:50.021843   20697 start.go:159] libmachine.API.Create for "addons-217306" (driver="docker")
	I0205 02:03:50.021881   20697 client.go:168] LocalClient.Create starting
	I0205 02:03:50.021985   20697 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca.pem
	I0205 02:03:50.134356   20697 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20363-12617/.minikube/certs/cert.pem
	I0205 02:03:50.245755   20697 cli_runner.go:164] Run: docker network inspect addons-217306 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0205 02:03:50.261112   20697 cli_runner.go:211] docker network inspect addons-217306 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0205 02:03:50.261176   20697 network_create.go:284] running [docker network inspect addons-217306] to gather additional debugging logs...
	I0205 02:03:50.261194   20697 cli_runner.go:164] Run: docker network inspect addons-217306
	W0205 02:03:50.276581   20697 cli_runner.go:211] docker network inspect addons-217306 returned with exit code 1
	I0205 02:03:50.276610   20697 network_create.go:287] error running [docker network inspect addons-217306]: docker network inspect addons-217306: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-217306 not found
	I0205 02:03:50.276638   20697 network_create.go:289] output of [docker network inspect addons-217306]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-217306 not found
	
	** /stderr **
	I0205 02:03:50.277115   20697 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0205 02:03:50.293535   20697 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016ac780}
	I0205 02:03:50.293606   20697 network_create.go:124] attempt to create docker network addons-217306 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0205 02:03:50.293661   20697 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-217306 addons-217306
	I0205 02:03:50.350831   20697 network_create.go:108] docker network addons-217306 192.168.49.0/24 created
	I0205 02:03:50.350864   20697 kic.go:121] calculated static IP "192.168.49.2" for the "addons-217306" container
	I0205 02:03:50.350929   20697 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0205 02:03:50.368118   20697 cli_runner.go:164] Run: docker volume create addons-217306 --label name.minikube.sigs.k8s.io=addons-217306 --label created_by.minikube.sigs.k8s.io=true
	I0205 02:03:50.386504   20697 oci.go:103] Successfully created a docker volume addons-217306
	I0205 02:03:50.386573   20697 cli_runner.go:164] Run: docker run --rm --name addons-217306-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-217306 --entrypoint /usr/bin/test -v addons-217306:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0205 02:03:57.552786   20697 cli_runner.go:217] Completed: docker run --rm --name addons-217306-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-217306 --entrypoint /usr/bin/test -v addons-217306:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib: (7.166176574s)
	I0205 02:03:57.552821   20697 oci.go:107] Successfully prepared a docker volume addons-217306
	I0205 02:03:57.552841   20697 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 02:03:57.552866   20697 kic.go:194] Starting extracting preloaded images to volume ...
	I0205 02:03:57.552918   20697 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20363-12617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-217306:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0205 02:04:02.210050   20697 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20363-12617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-217306:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.657093301s)
	I0205 02:04:02.210081   20697 kic.go:203] duration metric: took 4.657214202s to extract preloaded images to volume ...
	W0205 02:04:02.210206   20697 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0205 02:04:02.210311   20697 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0205 02:04:02.258019   20697 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-217306 --name addons-217306 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-217306 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-217306 --network addons-217306 --ip 192.168.49.2 --volume addons-217306:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0205 02:04:02.584347   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Running}}
	I0205 02:04:02.602065   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:02.620371   20697 cli_runner.go:164] Run: docker exec addons-217306 stat /var/lib/dpkg/alternatives/iptables
	I0205 02:04:02.659949   20697 oci.go:144] the created container "addons-217306" has a running status.
	I0205 02:04:02.660000   20697 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa...
	I0205 02:04:02.844429   20697 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0205 02:04:02.868913   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:02.887717   20697 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0205 02:04:02.887739   20697 kic_runner.go:114] Args: [docker exec --privileged addons-217306 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0205 02:04:02.940662   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:02.965667   20697 machine.go:93] provisionDockerMachine start ...
	I0205 02:04:02.965742   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:02.985272   20697 main.go:141] libmachine: Using SSH client type: native
	I0205 02:04:02.985491   20697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0205 02:04:02.985504   20697 main.go:141] libmachine: About to run SSH command:
	hostname
	I0205 02:04:03.200696   20697 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-217306
	
	I0205 02:04:03.200722   20697 ubuntu.go:169] provisioning hostname "addons-217306"
	I0205 02:04:03.200796   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:03.218617   20697 main.go:141] libmachine: Using SSH client type: native
	I0205 02:04:03.218831   20697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0205 02:04:03.218872   20697 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-217306 && echo "addons-217306" | sudo tee /etc/hostname
	I0205 02:04:03.355125   20697 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-217306
	
	I0205 02:04:03.355198   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:03.371321   20697 main.go:141] libmachine: Using SSH client type: native
	I0205 02:04:03.371480   20697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0205 02:04:03.371500   20697 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-217306' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-217306/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-217306' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0205 02:04:03.493462   20697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 02:04:03.493489   20697 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20363-12617/.minikube CaCertPath:/home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20363-12617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20363-12617/.minikube}
	I0205 02:04:03.493534   20697 ubuntu.go:177] setting up certificates
	I0205 02:04:03.493564   20697 provision.go:84] configureAuth start
	I0205 02:04:03.493622   20697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-217306
	I0205 02:04:03.509779   20697 provision.go:143] copyHostCerts
	I0205 02:04:03.509876   20697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20363-12617/.minikube/ca.pem (1078 bytes)
	I0205 02:04:03.510014   20697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20363-12617/.minikube/cert.pem (1123 bytes)
	I0205 02:04:03.510100   20697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20363-12617/.minikube/key.pem (1679 bytes)
	I0205 02:04:03.510162   20697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20363-12617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca-key.pem org=jenkins.addons-217306 san=[127.0.0.1 192.168.49.2 addons-217306 localhost minikube]
	I0205 02:04:03.743670   20697 provision.go:177] copyRemoteCerts
	I0205 02:04:03.743722   20697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0205 02:04:03.743753   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:03.760640   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:03.849467   20697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0205 02:04:03.869940   20697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0205 02:04:03.890151   20697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0205 02:04:03.910586   20697 provision.go:87] duration metric: took 417.001526ms to configureAuth
	I0205 02:04:03.910614   20697 ubuntu.go:193] setting minikube options for container-runtime
	I0205 02:04:03.910772   20697 config.go:182] Loaded profile config "addons-217306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:04:03.910861   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:03.927084   20697 main.go:141] libmachine: Using SSH client type: native
	I0205 02:04:03.927256   20697 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0205 02:04:03.927272   20697 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0205 02:04:04.130930   20697 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0205 02:04:04.130957   20697 machine.go:96] duration metric: took 1.165269894s to provisionDockerMachine
	I0205 02:04:04.130968   20697 client.go:171] duration metric: took 14.109076788s to LocalClient.Create
	I0205 02:04:04.130987   20697 start.go:167] duration metric: took 14.109144537s to libmachine.API.Create "addons-217306"
	I0205 02:04:04.130998   20697 start.go:293] postStartSetup for "addons-217306" (driver="docker")
	I0205 02:04:04.131017   20697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0205 02:04:04.131097   20697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0205 02:04:04.131138   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:04.147612   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:04.237717   20697 ssh_runner.go:195] Run: cat /etc/os-release
	I0205 02:04:04.240536   20697 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0205 02:04:04.240562   20697 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0205 02:04:04.240571   20697 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0205 02:04:04.240578   20697 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0205 02:04:04.240588   20697 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12617/.minikube/addons for local assets ...
	I0205 02:04:04.240631   20697 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12617/.minikube/files for local assets ...
	I0205 02:04:04.240658   20697 start.go:296] duration metric: took 109.65451ms for postStartSetup
	I0205 02:04:04.240898   20697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-217306
	I0205 02:04:04.257806   20697 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/config.json ...
	I0205 02:04:04.258038   20697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0205 02:04:04.258085   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:04.274208   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:04.365854   20697 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0205 02:04:04.369571   20697 start.go:128] duration metric: took 14.349579613s to createHost
	I0205 02:04:04.369594   20697 start.go:83] releasing machines lock for "addons-217306", held for 14.349725152s
	I0205 02:04:04.369642   20697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-217306
	I0205 02:04:04.385922   20697 ssh_runner.go:195] Run: cat /version.json
	I0205 02:04:04.385977   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:04.385994   20697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0205 02:04:04.386045   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:04.402811   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:04.403016   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:04.563238   20697 ssh_runner.go:195] Run: systemctl --version
	I0205 02:04:04.567124   20697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0205 02:04:04.701130   20697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0205 02:04:04.705166   20697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 02:04:04.722489   20697 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0205 02:04:04.722585   20697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 02:04:04.747362   20697 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0205 02:04:04.747385   20697 start.go:495] detecting cgroup driver to use...
	I0205 02:04:04.747419   20697 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0205 02:04:04.747455   20697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0205 02:04:04.760604   20697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0205 02:04:04.770173   20697 docker.go:217] disabling cri-docker service (if available) ...
	I0205 02:04:04.770213   20697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0205 02:04:04.781451   20697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0205 02:04:04.792646   20697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0205 02:04:04.864117   20697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0205 02:04:04.934772   20697 docker.go:233] disabling docker service ...
	I0205 02:04:04.934835   20697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0205 02:04:04.950970   20697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0205 02:04:04.960607   20697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0205 02:04:05.038798   20697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0205 02:04:05.106109   20697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0205 02:04:05.115556   20697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0205 02:04:05.128587   20697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0205 02:04:05.128627   20697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:04:05.136331   20697 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0205 02:04:05.136373   20697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:04:05.143954   20697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:04:05.151517   20697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:04:05.159189   20697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0205 02:04:05.166730   20697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:04:05.174898   20697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:04:05.187892   20697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:04:05.195718   20697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0205 02:04:05.202514   20697 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0205 02:04:05.202554   20697 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0205 02:04:05.214159   20697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0205 02:04:05.221038   20697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 02:04:05.285666   20697 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0205 02:04:05.388933   20697 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0205 02:04:05.389001   20697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0205 02:04:05.392187   20697 start.go:563] Will wait 60s for crictl version
	I0205 02:04:05.392235   20697 ssh_runner.go:195] Run: which crictl
	I0205 02:04:05.394898   20697 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0205 02:04:05.424276   20697 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0205 02:04:05.424410   20697 ssh_runner.go:195] Run: crio --version
	I0205 02:04:05.456346   20697 ssh_runner.go:195] Run: crio --version
	I0205 02:04:05.489493   20697 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.24.6 ...
	I0205 02:04:05.490842   20697 cli_runner.go:164] Run: docker network inspect addons-217306 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0205 02:04:05.506541   20697 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0205 02:04:05.509781   20697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 02:04:05.518853   20697 kubeadm.go:883] updating cluster {Name:addons-217306 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-217306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0205 02:04:05.518987   20697 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 02:04:05.519044   20697 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 02:04:05.580896   20697 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 02:04:05.580916   20697 crio.go:433] Images already preloaded, skipping extraction
	I0205 02:04:05.580954   20697 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 02:04:05.610282   20697 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 02:04:05.610305   20697 cache_images.go:84] Images are preloaded, skipping loading
	I0205 02:04:05.610312   20697 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.1 crio true true} ...
	I0205 02:04:05.610406   20697 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-217306 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-217306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0205 02:04:05.610483   20697 ssh_runner.go:195] Run: crio config
	I0205 02:04:05.648619   20697 cni.go:84] Creating CNI manager for ""
	I0205 02:04:05.648644   20697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0205 02:04:05.648655   20697 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0205 02:04:05.648682   20697 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-217306 NodeName:addons-217306 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0205 02:04:05.648818   20697 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-217306"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0205 02:04:05.648889   20697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0205 02:04:05.656478   20697 binaries.go:44] Found k8s binaries, skipping transfer
	I0205 02:04:05.656537   20697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0205 02:04:05.663690   20697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0205 02:04:05.678513   20697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0205 02:04:05.693314   20697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0205 02:04:05.707888   20697 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0205 02:04:05.710723   20697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 02:04:05.719909   20697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 02:04:05.794073   20697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 02:04:05.805096   20697 certs.go:68] Setting up /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306 for IP: 192.168.49.2
	I0205 02:04:05.805121   20697 certs.go:194] generating shared ca certs ...
	I0205 02:04:05.805141   20697 certs.go:226] acquiring lock for ca certs: {Name:mkf47158da08358d0aa679f4aa239783b5be6e53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:05.805270   20697 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20363-12617/.minikube/ca.key
	I0205 02:04:05.884349   20697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12617/.minikube/ca.crt ...
	I0205 02:04:05.884375   20697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/ca.crt: {Name:mk82dff90a4458b0424fbcb0e049eeca764a5e63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:05.884524   20697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12617/.minikube/ca.key ...
	I0205 02:04:05.884535   20697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/ca.key: {Name:mkb8a361f390b363e16573fe4a1434ea3eb9a662 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:05.884604   20697 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20363-12617/.minikube/proxy-client-ca.key
	I0205 02:04:06.040932   20697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12617/.minikube/proxy-client-ca.crt ...
	I0205 02:04:06.040958   20697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/proxy-client-ca.crt: {Name:mk08bf99505f537829eae9433f0768e9f25b224c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:06.041099   20697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12617/.minikube/proxy-client-ca.key ...
	I0205 02:04:06.041109   20697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/proxy-client-ca.key: {Name:mkd826cade6e3086ac4becdde11cb753d72601f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:06.041175   20697 certs.go:256] generating profile certs ...
	I0205 02:04:06.041225   20697 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.key
	I0205 02:04:06.041238   20697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt with IP's: []
	I0205 02:04:06.191860   20697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt ...
	I0205 02:04:06.191893   20697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: {Name:mk4fe6503977d965731329cd77563cfc5a96f25f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:06.192058   20697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.key ...
	I0205 02:04:06.192069   20697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.key: {Name:mkb625e8e524eaefdb1732c10dc7061869ee2d30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:06.192137   20697 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/apiserver.key.713f3192
	I0205 02:04:06.192155   20697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/apiserver.crt.713f3192 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0205 02:04:06.296590   20697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/apiserver.crt.713f3192 ...
	I0205 02:04:06.296619   20697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/apiserver.crt.713f3192: {Name:mk95f8a78466b89ca5b603509d91b3bea8d4208f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:06.296766   20697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/apiserver.key.713f3192 ...
	I0205 02:04:06.296779   20697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/apiserver.key.713f3192: {Name:mk810e6f955d2a71499aac57f31d9c975c3e0578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:06.296849   20697 certs.go:381] copying /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/apiserver.crt.713f3192 -> /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/apiserver.crt
	I0205 02:04:06.296919   20697 certs.go:385] copying /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/apiserver.key.713f3192 -> /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/apiserver.key
	I0205 02:04:06.296962   20697 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/proxy-client.key
	I0205 02:04:06.296980   20697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/proxy-client.crt with IP's: []
	I0205 02:04:06.365680   20697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/proxy-client.crt ...
	I0205 02:04:06.365705   20697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/proxy-client.crt: {Name:mkcbaf60cbe60c4e774f97602bb046ca5917c4cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:06.365846   20697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/proxy-client.key ...
	I0205 02:04:06.365856   20697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/proxy-client.key: {Name:mk907acd115b490dae9392294d623ca9af65ca95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:06.366022   20697 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca-key.pem (1675 bytes)
	I0205 02:04:06.366053   20697 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca.pem (1078 bytes)
	I0205 02:04:06.366079   20697 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12617/.minikube/certs/cert.pem (1123 bytes)
	I0205 02:04:06.366105   20697 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12617/.minikube/certs/key.pem (1679 bytes)
	I0205 02:04:06.366679   20697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0205 02:04:06.388088   20697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0205 02:04:06.407489   20697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0205 02:04:06.427411   20697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0205 02:04:06.447525   20697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0205 02:04:06.467423   20697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0205 02:04:06.487705   20697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0205 02:04:06.509106   20697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0205 02:04:06.529190   20697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0205 02:04:06.549374   20697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0205 02:04:06.564294   20697 ssh_runner.go:195] Run: openssl version
	I0205 02:04:06.569104   20697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0205 02:04:06.576953   20697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0205 02:04:06.579828   20697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  5 02:04 /usr/share/ca-certificates/minikubeCA.pem
	I0205 02:04:06.579867   20697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0205 02:04:06.585833   20697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0205 02:04:06.593674   20697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0205 02:04:06.596408   20697 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0205 02:04:06.596454   20697 kubeadm.go:392] StartCluster: {Name:addons-217306 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-217306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:04:06.596583   20697 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0205 02:04:06.596634   20697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0205 02:04:06.627223   20697 cri.go:89] found id: ""
	I0205 02:04:06.627288   20697 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0205 02:04:06.634934   20697 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0205 02:04:06.642347   20697 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0205 02:04:06.642395   20697 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 02:04:06.649399   20697 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 02:04:06.649415   20697 kubeadm.go:157] found existing configuration files:
	
	I0205 02:04:06.649459   20697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 02:04:06.656551   20697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 02:04:06.656609   20697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 02:04:06.663633   20697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 02:04:06.670638   20697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 02:04:06.670748   20697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 02:04:06.677493   20697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 02:04:06.684535   20697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 02:04:06.684572   20697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 02:04:06.691522   20697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 02:04:06.698375   20697 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 02:04:06.698410   20697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0205 02:04:06.705229   20697 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0205 02:04:06.754282   20697 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0205 02:04:06.754509   20697 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-gcp\n", err: exit status 1
	I0205 02:04:06.803003   20697 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0205 02:04:14.813077   20697 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0205 02:04:14.813123   20697 kubeadm.go:310] [preflight] Running pre-flight checks
	I0205 02:04:14.813193   20697 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0205 02:04:14.813238   20697 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-gcp
	I0205 02:04:14.813267   20697 kubeadm.go:310] OS: Linux
	I0205 02:04:14.813308   20697 kubeadm.go:310] CGROUPS_CPU: enabled
	I0205 02:04:14.813346   20697 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0205 02:04:14.813414   20697 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0205 02:04:14.813466   20697 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0205 02:04:14.813508   20697 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0205 02:04:14.813594   20697 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0205 02:04:14.813640   20697 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0205 02:04:14.813707   20697 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0205 02:04:14.813749   20697 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0205 02:04:14.813818   20697 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0205 02:04:14.813895   20697 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0205 02:04:14.813969   20697 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0205 02:04:14.814018   20697 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0205 02:04:14.815738   20697 out.go:235]   - Generating certificates and keys ...
	I0205 02:04:14.815805   20697 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 02:04:14.815876   20697 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 02:04:14.815948   20697 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0205 02:04:14.816003   20697 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0205 02:04:14.816055   20697 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0205 02:04:14.816102   20697 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0205 02:04:14.816145   20697 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0205 02:04:14.816246   20697 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-217306 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0205 02:04:14.816289   20697 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0205 02:04:14.816382   20697 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-217306 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0205 02:04:14.816434   20697 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0205 02:04:14.816484   20697 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0205 02:04:14.816524   20697 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0205 02:04:14.816569   20697 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 02:04:14.816647   20697 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 02:04:14.816746   20697 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0205 02:04:14.816798   20697 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 02:04:14.816861   20697 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 02:04:14.816910   20697 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 02:04:14.816982   20697 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 02:04:14.817045   20697 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 02:04:14.818551   20697 out.go:235]   - Booting up control plane ...
	I0205 02:04:14.818626   20697 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 02:04:14.818694   20697 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 02:04:14.818773   20697 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 02:04:14.818867   20697 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 02:04:14.818935   20697 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 02:04:14.818967   20697 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 02:04:14.819072   20697 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0205 02:04:14.819163   20697 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0205 02:04:14.819261   20697 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.529396ms
	I0205 02:04:14.819362   20697 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0205 02:04:14.819432   20697 kubeadm.go:310] [api-check] The API server is healthy after 4.001369597s
	I0205 02:04:14.819569   20697 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0205 02:04:14.819736   20697 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0205 02:04:14.819824   20697 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0205 02:04:14.820089   20697 kubeadm.go:310] [mark-control-plane] Marking the node addons-217306 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0205 02:04:14.820138   20697 kubeadm.go:310] [bootstrap-token] Using token: yzrkzd.b2fnkmrkmbuj3rnw
	I0205 02:04:14.822431   20697 out.go:235]   - Configuring RBAC rules ...
	I0205 02:04:14.822521   20697 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0205 02:04:14.822607   20697 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0205 02:04:14.822735   20697 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0205 02:04:14.822881   20697 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0205 02:04:14.822991   20697 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0205 02:04:14.823093   20697 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0205 02:04:14.823199   20697 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0205 02:04:14.823238   20697 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0205 02:04:14.823279   20697 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0205 02:04:14.823285   20697 kubeadm.go:310] 
	I0205 02:04:14.823338   20697 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0205 02:04:14.823345   20697 kubeadm.go:310] 
	I0205 02:04:14.823418   20697 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0205 02:04:14.823429   20697 kubeadm.go:310] 
	I0205 02:04:14.823471   20697 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0205 02:04:14.823529   20697 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0205 02:04:14.823570   20697 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0205 02:04:14.823576   20697 kubeadm.go:310] 
	I0205 02:04:14.823627   20697 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0205 02:04:14.823633   20697 kubeadm.go:310] 
	I0205 02:04:14.823671   20697 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0205 02:04:14.823677   20697 kubeadm.go:310] 
	I0205 02:04:14.823716   20697 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0205 02:04:14.823775   20697 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0205 02:04:14.823837   20697 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0205 02:04:14.823848   20697 kubeadm.go:310] 
	I0205 02:04:14.823916   20697 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0205 02:04:14.824009   20697 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0205 02:04:14.824020   20697 kubeadm.go:310] 
	I0205 02:04:14.824096   20697 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yzrkzd.b2fnkmrkmbuj3rnw \
	I0205 02:04:14.824195   20697 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4f5b0b470d86181f8e656721a5e49e4a405b9f662421ec1e549cfda981306944 \
	I0205 02:04:14.824219   20697 kubeadm.go:310] 	--control-plane 
	I0205 02:04:14.824225   20697 kubeadm.go:310] 
	I0205 02:04:14.824319   20697 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0205 02:04:14.824337   20697 kubeadm.go:310] 
	I0205 02:04:14.824433   20697 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yzrkzd.b2fnkmrkmbuj3rnw \
	I0205 02:04:14.824546   20697 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4f5b0b470d86181f8e656721a5e49e4a405b9f662421ec1e549cfda981306944 
	I0205 02:04:14.824557   20697 cni.go:84] Creating CNI manager for ""
	I0205 02:04:14.824562   20697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0205 02:04:14.826852   20697 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0205 02:04:14.828238   20697 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0205 02:04:14.831930   20697 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0205 02:04:14.831947   20697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0205 02:04:14.848771   20697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0205 02:04:15.041132   20697 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0205 02:04:15.041206   20697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:15.041240   20697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-217306 minikube.k8s.io/updated_at=2025_02_05T02_04_15_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=e14c106b6cd223b80b8f7425af26284a33f75c3d minikube.k8s.io/name=addons-217306 minikube.k8s.io/primary=true
	I0205 02:04:15.047963   20697 ops.go:34] apiserver oom_adj: -16
	I0205 02:04:15.143157   20697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:15.643798   20697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:16.143382   20697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:16.643987   20697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:17.143556   20697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:17.643879   20697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:18.143218   20697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:18.643697   20697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:19.144239   20697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:04:19.205362   20697 kubeadm.go:1113] duration metric: took 4.164216852s to wait for elevateKubeSystemPrivileges
	I0205 02:04:19.205397   20697 kubeadm.go:394] duration metric: took 12.608946605s to StartCluster
	I0205 02:04:19.205420   20697 settings.go:142] acquiring lock: {Name:mk9276b273f579f5d6fc4784e85dc48e5e91aadf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:19.205533   20697 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20363-12617/kubeconfig
	I0205 02:04:19.205920   20697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/kubeconfig: {Name:mk409188e78b16bca4bb55c54818efe1c75fa3a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:04:19.206094   20697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0205 02:04:19.206112   20697 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 02:04:19.206166   20697 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
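The toEnable map above is the full addon toggle set computed for this profile; each true entry is enabled in the loop that follows. The hand-run equivalent of flipping one of these entries is the minikube addons command, sketched here against the profile name from this run:

	minikube addons enable ingress -p addons-217306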
	I0205 02:04:19.206272   20697 addons.go:69] Setting yakd=true in profile "addons-217306"
	I0205 02:04:19.206285   20697 addons.go:69] Setting volcano=true in profile "addons-217306"
	I0205 02:04:19.206297   20697 addons.go:238] Setting addon yakd=true in "addons-217306"
	I0205 02:04:19.206298   20697 addons.go:69] Setting inspektor-gadget=true in profile "addons-217306"
	I0205 02:04:19.206312   20697 addons.go:238] Setting addon volcano=true in "addons-217306"
	I0205 02:04:19.206305   20697 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-217306"
	I0205 02:04:19.206335   20697 config.go:182] Loaded profile config "addons-217306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:04:19.206343   20697 host.go:66] Checking if "addons-217306" exists ...
	I0205 02:04:19.206347   20697 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-217306"
	I0205 02:04:19.206348   20697 addons.go:69] Setting registry=true in profile "addons-217306"
	I0205 02:04:19.206360   20697 addons.go:238] Setting addon registry=true in "addons-217306"
	I0205 02:04:19.206369   20697 addons.go:69] Setting ingress=true in profile "addons-217306"
	I0205 02:04:19.206322   20697 addons.go:238] Setting addon inspektor-gadget=true in "addons-217306"
	I0205 02:04:19.206376   20697 host.go:66] Checking if "addons-217306" exists ...
	I0205 02:04:19.206380   20697 addons.go:238] Setting addon ingress=true in "addons-217306"
	I0205 02:04:19.206382   20697 addons.go:69] Setting metrics-server=true in profile "addons-217306"
	I0205 02:04:19.206395   20697 addons.go:238] Setting addon metrics-server=true in "addons-217306"
	I0205 02:04:19.206398   20697 host.go:66] Checking if "addons-217306" exists ...
	I0205 02:04:19.206412   20697 host.go:66] Checking if "addons-217306" exists ...
	I0205 02:04:19.206424   20697 host.go:66] Checking if "addons-217306" exists ...
	I0205 02:04:19.206535   20697 addons.go:69] Setting ingress-dns=true in profile "addons-217306"
	I0205 02:04:19.206551   20697 addons.go:238] Setting addon ingress-dns=true in "addons-217306"
	I0205 02:04:19.206552   20697 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-217306"
	I0205 02:04:19.206571   20697 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-217306"
	I0205 02:04:19.206581   20697 host.go:66] Checking if "addons-217306" exists ...
	I0205 02:04:19.206850   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.206868   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.206906   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.206909   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.206923   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.206343   20697 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-217306"
	I0205 02:04:19.206959   20697 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-217306"
	I0205 02:04:19.206979   20697 host.go:66] Checking if "addons-217306" exists ...
	I0205 02:04:19.207017   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.207093   20697 addons.go:69] Setting storage-provisioner=true in profile "addons-217306"
	I0205 02:04:19.207127   20697 addons.go:238] Setting addon storage-provisioner=true in "addons-217306"
	I0205 02:04:19.207165   20697 host.go:66] Checking if "addons-217306" exists ...
	I0205 02:04:19.207416   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.206360   20697 addons.go:69] Setting gcp-auth=true in profile "addons-217306"
	I0205 02:04:19.207550   20697 mustload.go:65] Loading cluster: addons-217306
	I0205 02:04:19.206328   20697 host.go:66] Checking if "addons-217306" exists ...
	I0205 02:04:19.207625   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.206351   20697 addons.go:69] Setting default-storageclass=true in profile "addons-217306"
	I0205 02:04:19.207747   20697 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-217306"
	I0205 02:04:19.208043   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.211124   20697 out.go:177] * Verifying Kubernetes components...
	I0205 02:04:19.212933   20697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 02:04:19.206337   20697 addons.go:69] Setting volumesnapshots=true in profile "addons-217306"
	I0205 02:04:19.213044   20697 addons.go:238] Setting addon volumesnapshots=true in "addons-217306"
	I0205 02:04:19.213077   20697 host.go:66] Checking if "addons-217306" exists ...
	I0205 02:04:19.213599   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.206269   20697 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-217306"
	I0205 02:04:19.217271   20697 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-217306"
	I0205 02:04:19.217310   20697 host.go:66] Checking if "addons-217306" exists ...
	I0205 02:04:19.217855   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.206852   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.206373   20697 host.go:66] Checking if "addons-217306" exists ...
	I0205 02:04:19.226247   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.230524   20697 config.go:182] Loaded profile config "addons-217306": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:04:19.206333   20697 addons.go:69] Setting cloud-spanner=true in profile "addons-217306"
	I0205 02:04:19.233283   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.247654   20697 addons.go:238] Setting addon cloud-spanner=true in "addons-217306"
	I0205 02:04:19.247726   20697 host.go:66] Checking if "addons-217306" exists ...
	I0205 02:04:19.247819   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.248288   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.256471   20697 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0205 02:04:19.257309   20697 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-217306"
	I0205 02:04:19.257343   20697 host.go:66] Checking if "addons-217306" exists ...
	I0205 02:04:19.257748   20697 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0205 02:04:19.257847   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.258891   20697 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0205 02:04:19.258910   20697 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0205 02:04:19.258970   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:19.261618   20697 out.go:177]   - Using image docker.io/registry:2.8.3
	I0205 02:04:19.266247   20697 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0205 02:04:19.266267   20697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0205 02:04:19.266319   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:19.268461   20697 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0205 02:04:19.268509   20697 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0205 02:04:19.268531   20697 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0205 02:04:19.270077   20697 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0205 02:04:19.270107   20697 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0205 02:04:19.270164   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:19.270248   20697 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0205 02:04:19.270258   20697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0205 02:04:19.270296   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:19.270514   20697 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0205 02:04:19.270528   20697 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0205 02:04:19.270586   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:19.278887   20697 addons.go:238] Setting addon default-storageclass=true in "addons-217306"
	I0205 02:04:19.278948   20697 host.go:66] Checking if "addons-217306" exists ...
	I0205 02:04:19.279421   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:19.279578   20697 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	W0205 02:04:19.279843   20697 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0205 02:04:19.282057   20697 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0205 02:04:19.283636   20697 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0205 02:04:19.285197   20697 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0205 02:04:19.286536   20697 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0205 02:04:19.288142   20697 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0205 02:04:19.288249   20697 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0205 02:04:19.289383   20697 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 02:04:19.290506   20697 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0205 02:04:19.290648   20697 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 02:04:19.290661   20697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0205 02:04:19.290715   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:19.290877   20697 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0205 02:04:19.292233   20697 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0205 02:04:19.292253   20697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0205 02:04:19.292309   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:19.293525   20697 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0205 02:04:19.294681   20697 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0205 02:04:19.295807   20697 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0205 02:04:19.295825   20697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0205 02:04:19.295875   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:19.322318   20697 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0205 02:04:19.322368   20697 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0205 02:04:19.323687   20697 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0205 02:04:19.323706   20697 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0205 02:04:19.323762   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:19.327163   20697 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0205 02:04:19.327350   20697 out.go:177]   - Using image docker.io/busybox:stable
	I0205 02:04:19.327415   20697 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0205 02:04:19.328259   20697 host.go:66] Checking if "addons-217306" exists ...
	I0205 02:04:19.328620   20697 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0205 02:04:19.328638   20697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0205 02:04:19.328692   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:19.329025   20697 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0205 02:04:19.329041   20697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0205 02:04:19.329085   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:19.329306   20697 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0205 02:04:19.329315   20697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0205 02:04:19.329353   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:19.332684   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:19.336947   20697 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0205 02:04:19.338399   20697 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0205 02:04:19.338430   20697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0205 02:04:19.338474   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:19.352748   20697 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0205 02:04:19.352782   20697 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0205 02:04:19.352842   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:19.356772   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:19.359779   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:19.364691   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:19.365917   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:19.365962   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:19.366488   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:19.373639   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:19.390994   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:19.393074   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:19.395134   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:19.395392   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:19.396043   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:19.400717   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
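The docker container inspect calls above use two Go templates against the same container: --format={{.State.Status}} returns the container state (for example "running"), and the longer -f expression looks up the host port Docker published for container port 22/tcp, which is how minikube learns the SSH endpoint 127.0.0.1:32768 used by the ssh clients above. Both can be reproduced directly against this container:

	docker container inspect addons-217306 --format '{{.State.Status}}'
	docker container inspect addons-217306 -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'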
	W0205 02:04:19.429239   20697 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0205 02:04:19.429279   20697 retry.go:31] will retry after 372.714185ms: ssh: handshake failed: EOF
	I0205 02:04:19.450683   20697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
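The pipeline above rewrites the coredns ConfigMap in place: sed inserts a hosts block immediately before the "forward . /etc/resolv.conf" line, adds a bare "log" directive before "errors", and the result is piped back through kubectl replace. Reconstructed from the sed expressions (not a dump of the ConfigMap from this run), the injected hosts block is:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }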
	I0205 02:04:19.450801   20697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 02:04:19.653798   20697 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0205 02:04:19.653827   20697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0205 02:04:19.727470   20697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0205 02:04:19.730468   20697 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0205 02:04:19.730539   20697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0205 02:04:19.746698   20697 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0205 02:04:19.746789   20697 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0205 02:04:19.827388   20697 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0205 02:04:19.827415   20697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0205 02:04:19.837524   20697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0205 02:04:19.847747   20697 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0205 02:04:19.847848   20697 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0205 02:04:19.848443   20697 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0205 02:04:19.848489   20697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0205 02:04:19.850193   20697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 02:04:19.926165   20697 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0205 02:04:19.926209   20697 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0205 02:04:20.026333   20697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0205 02:04:20.027083   20697 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0205 02:04:20.027109   20697 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0205 02:04:20.030875   20697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0205 02:04:20.034113   20697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0205 02:04:20.039218   20697 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0205 02:04:20.039294   20697 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0205 02:04:20.048332   20697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0205 02:04:20.136639   20697 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0205 02:04:20.136738   20697 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0205 02:04:20.144843   20697 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0205 02:04:20.144945   20697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0205 02:04:20.235956   20697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0205 02:04:20.331073   20697 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0205 02:04:20.331162   20697 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0205 02:04:20.345021   20697 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0205 02:04:20.345046   20697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0205 02:04:20.428996   20697 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0205 02:04:20.429038   20697 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0205 02:04:20.539664   20697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0205 02:04:20.542091   20697 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0205 02:04:20.542155   20697 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0205 02:04:20.642405   20697 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0205 02:04:20.642432   20697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0205 02:04:20.730906   20697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0205 02:04:20.830543   20697 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0205 02:04:20.830570   20697 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0205 02:04:20.846202   20697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0205 02:04:21.033805   20697 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0205 02:04:21.033895   20697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0205 02:04:21.046707   20697 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0205 02:04:21.046787   20697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0205 02:04:21.347307   20697 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0205 02:04:21.347337   20697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0205 02:04:21.429094   20697 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0205 02:04:21.429123   20697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0205 02:04:21.827296   20697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0205 02:04:21.842183   20697 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.391357227s)
	I0205 02:04:21.843194   20697 node_ready.go:35] waiting up to 6m0s for node "addons-217306" to be "Ready" ...
	I0205 02:04:21.843468   20697 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.392752737s)
	I0205 02:04:21.843493   20697 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0205 02:04:21.941944   20697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0205 02:04:22.044363   20697 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0205 02:04:22.044454   20697 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0205 02:04:22.540998   20697 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0205 02:04:22.541061   20697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0205 02:04:22.646133   20697 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-217306" context rescaled to 1 replicas
	I0205 02:04:22.834804   20697 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0205 02:04:22.834886   20697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0205 02:04:23.027923   20697 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0205 02:04:23.028017   20697 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0205 02:04:23.143549   20697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0205 02:04:23.146205   20697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.418636523s)
	I0205 02:04:23.847328   20697 node_ready.go:53] node "addons-217306" has status "Ready":"False"
	I0205 02:04:23.927403   20697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.089770791s)
	I0205 02:04:23.927787   20697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.077552795s)
	I0205 02:04:23.938413   20697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.912039855s)
	I0205 02:04:23.938535   20697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.907635375s)
	I0205 02:04:23.938591   20697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.90444019s)
	I0205 02:04:23.938642   20697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.890229912s)
	I0205 02:04:25.047936   20697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.811882987s)
	I0205 02:04:25.047973   20697 addons.go:479] Verifying addon ingress=true in "addons-217306"
	I0205 02:04:25.048003   20697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.50824028s)
	I0205 02:04:25.048022   20697 addons.go:479] Verifying addon metrics-server=true in "addons-217306"
	I0205 02:04:25.048097   20697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.201870451s)
	I0205 02:04:25.048071   20697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.317138909s)
	I0205 02:04:25.048308   20697 addons.go:479] Verifying addon registry=true in "addons-217306"
	I0205 02:04:25.048358   20697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.220982787s)
	I0205 02:04:25.049638   20697 out.go:177] * Verifying registry addon...
	I0205 02:04:25.049679   20697 out.go:177] * Verifying ingress addon...
	I0205 02:04:25.049716   20697 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-217306 service yakd-dashboard -n yakd-dashboard
	
	I0205 02:04:25.052077   20697 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0205 02:04:25.053024   20697 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0205 02:04:25.054436   20697 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0205 02:04:25.054458   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:25.055272   20697 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0205 02:04:25.055292   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
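The kapi.go lines here, and the long run of near-identical lines that follows, are a polling loop: roughly every 500ms minikube lists pods matching the label selector and logs their phase until they leave Pending. A manual spot-check with plain kubectl, using the namespaces and selectors from the log, would be:

	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx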
	I0205 02:04:25.555802   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:25.556139   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:25.946456   20697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.004455428s)
	W0205 02:04:25.946540   20697 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0205 02:04:25.946574   20697 retry.go:31] will retry after 160.272316ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
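The failure recorded above is an ordering problem rather than a broken manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but it is applied in the same batch that creates the snapshot.storage.k8s.io CRDs, and the API server had not yet registered that API when the class was submitted (hence "no matches for kind ... ensure CRDs are installed first"). The retry below re-applies the same files with --force; by then the CRDs created on the first pass are established, so it completes (see the 02:04:29 completion line). A minimal sketch of the dependency, reusing the manifest paths from this run:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml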
	I0205 02:04:26.054632   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:26.055425   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:26.107673   20697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0205 02:04:26.335048   20697 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0205 02:04:26.335118   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:26.346157   20697 node_ready.go:53] node "addons-217306" has status "Ready":"False"
	I0205 02:04:26.361544   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:26.549017   20697 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0205 02:04:26.556868   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:26.557308   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:26.637723   20697 addons.go:238] Setting addon gcp-auth=true in "addons-217306"
	I0205 02:04:26.637796   20697 host.go:66] Checking if "addons-217306" exists ...
	I0205 02:04:26.638210   20697 cli_runner.go:164] Run: docker container inspect addons-217306 --format={{.State.Status}}
	I0205 02:04:26.651937   20697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.508313915s)
	I0205 02:04:26.651968   20697 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-217306"
	I0205 02:04:26.654783   20697 out.go:177] * Verifying csi-hostpath-driver addon...
	I0205 02:04:26.657089   20697 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0205 02:04:26.664314   20697 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0205 02:04:26.664338   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:26.664505   20697 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0205 02:04:26.664556   20697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-217306
	I0205 02:04:26.680901   20697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/addons-217306/id_rsa Username:docker}
	I0205 02:04:27.055418   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:27.055908   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:27.160225   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:27.555763   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:27.555877   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:27.660159   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:28.054646   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:28.055783   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:28.160261   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:28.555649   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:28.555792   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:28.660153   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:28.845766   20697 node_ready.go:53] node "addons-217306" has status "Ready":"False"
	I0205 02:04:29.001408   20697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.893685865s)
	I0205 02:04:29.001494   20697 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.336962333s)
	I0205 02:04:29.003365   20697 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0205 02:04:29.004881   20697 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0205 02:04:29.006267   20697 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0205 02:04:29.006284   20697 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0205 02:04:29.022773   20697 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0205 02:04:29.022796   20697 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0205 02:04:29.039780   20697 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0205 02:04:29.039819   20697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0205 02:04:29.055735   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:29.055806   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:29.056727   20697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0205 02:04:29.160729   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:29.462149   20697 addons.go:479] Verifying addon gcp-auth=true in "addons-217306"
	I0205 02:04:29.463567   20697 out.go:177] * Verifying gcp-auth addon...
	I0205 02:04:29.465469   20697 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0205 02:04:29.467697   20697 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0205 02:04:29.467718   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:29.555195   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:29.555334   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:29.659530   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:29.968248   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:30.054755   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:30.055612   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:30.159917   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:30.469084   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:30.554612   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:30.555367   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:30.659687   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:30.846228   20697 node_ready.go:53] node "addons-217306" has status "Ready":"False"
	I0205 02:04:30.968545   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:31.055176   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:31.055630   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:31.160100   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:31.467936   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:31.555466   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:31.555599   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:31.659858   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:31.968892   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:32.070011   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:32.070123   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:32.160509   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:32.468554   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:32.555015   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:32.555876   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:32.660280   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:32.968163   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:33.054405   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:33.055547   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:33.160364   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:33.345861   20697 node_ready.go:53] node "addons-217306" has status "Ready":"False"
	I0205 02:04:33.468503   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:33.554989   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:33.555831   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:33.660198   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:33.968036   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:34.055639   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:34.055733   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:34.159981   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:34.469179   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:34.554426   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:34.555242   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:34.659978   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:34.968789   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:35.055264   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:35.055347   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:35.159988   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:35.346494   20697 node_ready.go:53] node "addons-217306" has status "Ready":"False"
	I0205 02:04:35.468700   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:35.555344   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:35.556173   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:35.659541   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:35.968345   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:36.054689   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:36.055524   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:36.160000   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:36.468561   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:36.554970   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:36.555724   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:36.660446   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:36.968280   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:37.054710   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:37.055472   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:37.159749   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:37.468649   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:37.555545   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:37.555889   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:37.660275   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:37.845627   20697 node_ready.go:53] node "addons-217306" has status "Ready":"False"
	I0205 02:04:37.967904   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:38.058298   20697 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0205 02:04:38.058327   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:38.058342   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:38.159707   20697 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0205 02:04:38.159765   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:38.346664   20697 node_ready.go:49] node "addons-217306" has status "Ready":"True"
	I0205 02:04:38.346685   20697 node_ready.go:38] duration metric: took 16.503454428s for node "addons-217306" to be "Ready" ...
	I0205 02:04:38.346697   20697 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 02:04:38.352068   20697 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-576th" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:38.467964   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:38.555459   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:38.555554   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:38.660415   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:39.028596   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:39.128784   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:39.129052   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:39.229845   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:39.468828   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:39.555786   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:39.555916   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:39.660539   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:39.968545   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:40.055483   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:40.056149   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:40.161112   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:40.357480   20697 pod_ready.go:103] pod "amd-gpu-device-plugin-576th" in "kube-system" namespace has status "Ready":"False"
	I0205 02:04:40.468111   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:40.556615   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:40.556774   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:40.660529   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:40.968631   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:41.055070   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:41.055927   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:41.160313   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:41.469073   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:41.555639   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:41.555695   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:41.660109   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:42.031991   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:42.127217   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:42.129326   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:42.227656   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:42.433164   20697 pod_ready.go:103] pod "amd-gpu-device-plugin-576th" in "kube-system" namespace has status "Ready":"False"
	I0205 02:04:42.530510   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:42.555377   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:42.556099   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:42.661035   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:43.028782   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:43.055842   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:43.056118   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:43.160432   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:43.468665   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:43.555178   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:43.556077   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:43.660823   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:43.968490   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:44.055438   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:44.055708   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:44.160768   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:44.467760   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:44.555445   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:44.555631   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:44.660143   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:44.857451   20697 pod_ready.go:103] pod "amd-gpu-device-plugin-576th" in "kube-system" namespace has status "Ready":"False"
	I0205 02:04:44.969646   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:45.055149   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:45.056003   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:45.161888   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:45.469271   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:45.554877   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:45.555879   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:45.660648   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:45.969234   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:46.069869   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:46.069973   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:46.173190   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:46.468677   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:46.555537   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:46.556024   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:46.660813   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:46.968770   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:47.059490   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:47.059682   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:47.160144   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:47.356676   20697 pod_ready.go:103] pod "amd-gpu-device-plugin-576th" in "kube-system" namespace has status "Ready":"False"
	I0205 02:04:47.468644   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:47.555403   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:47.556324   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:47.659950   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:47.968987   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:48.055512   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:48.055617   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:48.160131   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:48.468976   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:48.555398   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:48.555595   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:48.660255   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:48.968171   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:49.055541   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:49.055696   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:49.160525   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:49.357433   20697 pod_ready.go:103] pod "amd-gpu-device-plugin-576th" in "kube-system" namespace has status "Ready":"False"
	I0205 02:04:49.469626   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:49.555311   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:49.555832   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:49.660606   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:49.969188   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:50.054622   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:50.055456   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:50.160297   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:50.469353   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:50.555282   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:50.555986   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:50.660651   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:50.968776   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:51.055616   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:51.055738   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:51.160572   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:51.469051   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:51.555965   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:51.556076   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:51.661235   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:51.856870   20697 pod_ready.go:103] pod "amd-gpu-device-plugin-576th" in "kube-system" namespace has status "Ready":"False"
	I0205 02:04:51.968596   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:52.055486   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:52.056272   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:52.159818   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:52.468715   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:52.555373   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:52.556135   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:52.661104   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:52.968048   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:53.055810   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:53.055828   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:53.160769   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:53.468323   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:53.554895   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:53.555714   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:53.660234   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:53.856835   20697 pod_ready.go:93] pod "amd-gpu-device-plugin-576th" in "kube-system" namespace has status "Ready":"True"
	I0205 02:04:53.856854   20697 pod_ready.go:82] duration metric: took 15.504752326s for pod "amd-gpu-device-plugin-576th" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:53.856865   20697 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-4scds" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:53.860919   20697 pod_ready.go:93] pod "coredns-668d6bf9bc-4scds" in "kube-system" namespace has status "Ready":"True"
	I0205 02:04:53.860941   20697 pod_ready.go:82] duration metric: took 4.069668ms for pod "coredns-668d6bf9bc-4scds" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:53.860967   20697 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-217306" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:53.864726   20697 pod_ready.go:93] pod "etcd-addons-217306" in "kube-system" namespace has status "Ready":"True"
	I0205 02:04:53.864747   20697 pod_ready.go:82] duration metric: took 3.771962ms for pod "etcd-addons-217306" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:53.864762   20697 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-217306" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:53.868449   20697 pod_ready.go:93] pod "kube-apiserver-addons-217306" in "kube-system" namespace has status "Ready":"True"
	I0205 02:04:53.868474   20697 pod_ready.go:82] duration metric: took 3.704798ms for pod "kube-apiserver-addons-217306" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:53.868485   20697 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-217306" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:53.871948   20697 pod_ready.go:93] pod "kube-controller-manager-addons-217306" in "kube-system" namespace has status "Ready":"True"
	I0205 02:04:53.871966   20697 pod_ready.go:82] duration metric: took 3.473802ms for pod "kube-controller-manager-addons-217306" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:53.871980   20697 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8djtv" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:53.968797   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:54.056032   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:54.058021   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:54.160828   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:54.255056   20697 pod_ready.go:93] pod "kube-proxy-8djtv" in "kube-system" namespace has status "Ready":"True"
	I0205 02:04:54.255077   20697 pod_ready.go:82] duration metric: took 383.090455ms for pod "kube-proxy-8djtv" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:54.255086   20697 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-217306" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:54.469049   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:54.556047   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:54.556344   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:54.655295   20697 pod_ready.go:93] pod "kube-scheduler-addons-217306" in "kube-system" namespace has status "Ready":"True"
	I0205 02:04:54.655319   20697 pod_ready.go:82] duration metric: took 400.225545ms for pod "kube-scheduler-addons-217306" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:54.655331   20697 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-9jb6h" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:54.659572   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:54.968612   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:55.055167   20697 pod_ready.go:93] pod "metrics-server-7fbb699795-9jb6h" in "kube-system" namespace has status "Ready":"True"
	I0205 02:04:55.055193   20697 pod_ready.go:82] duration metric: took 399.853459ms for pod "metrics-server-7fbb699795-9jb6h" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:55.055205   20697 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-pkrw7" in "kube-system" namespace to be "Ready" ...
	I0205 02:04:55.069955   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:55.069974   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:55.160743   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:55.469403   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:55.555126   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:55.555677   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:55.660889   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:55.968914   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:56.069406   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:56.069458   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:56.172861   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:56.469292   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:56.555063   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:56.555672   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:56.659823   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:56.968857   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:57.056210   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:57.056221   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:57.059783   20697 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pkrw7" in "kube-system" namespace has status "Ready":"False"
	I0205 02:04:57.160822   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:57.527017   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:57.556038   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:57.556074   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:57.660824   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:57.968560   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:58.055201   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:58.056187   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:58.160002   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:58.468383   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:58.555060   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:58.558862   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:58.660661   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:58.968383   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:59.055451   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:59.055985   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:59.160747   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:59.468858   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:04:59.555442   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:04:59.555496   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:04:59.560896   20697 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pkrw7" in "kube-system" namespace has status "Ready":"False"
	I0205 02:04:59.660512   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:04:59.967893   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:00.055474   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:00.055506   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:00.160470   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:00.469741   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:00.555696   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:00.555698   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:00.661009   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:00.968922   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:01.056181   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:01.056287   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:01.160963   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:01.468770   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:01.555424   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:01.555650   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:01.660492   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:01.967844   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:02.055529   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:02.055551   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:02.058993   20697 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pkrw7" in "kube-system" namespace has status "Ready":"False"
	I0205 02:05:02.160818   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:02.468573   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:02.555454   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:02.556295   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:02.660859   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:02.968391   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:03.054951   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:03.055988   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:03.160305   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:03.468866   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:03.555661   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:03.555703   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:03.660885   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:03.968641   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:04.055172   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:04.056001   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:04.059279   20697 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pkrw7" in "kube-system" namespace has status "Ready":"False"
	I0205 02:05:04.160066   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:04.469047   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:04.569645   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:04.569687   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:04.660439   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:04.969384   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:05.055389   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:05.055732   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:05.161247   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:05.476363   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:05.555108   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:05.555851   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:05.661298   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:05.968937   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:06.059802   20697 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pkrw7" in "kube-system" namespace has status "Ready":"False"
	I0205 02:05:06.069891   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:06.069904   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:06.160495   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:06.467869   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:06.556170   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:06.556260   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:06.660490   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:06.967879   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:07.055809   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:07.055836   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:07.160004   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:07.468529   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:07.554991   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:07.555905   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:07.660750   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:07.968189   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:08.054870   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:08.055833   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:08.160929   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:08.529415   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:08.637504   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:08.637540   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:08.644525   20697 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pkrw7" in "kube-system" namespace has status "Ready":"False"
	I0205 02:05:08.741075   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:09.028393   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:09.129908   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:09.130572   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:09.233810   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:09.529045   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:09.634110   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:09.634304   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:09.736997   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:10.027250   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:10.127382   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:10.128375   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:10.160923   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:10.468957   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:10.556180   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:10.556292   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:10.660821   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:10.968955   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:11.056104   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:11.056172   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:11.059705   20697 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pkrw7" in "kube-system" namespace has status "Ready":"False"
	I0205 02:05:11.160666   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:11.468713   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:11.555544   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:11.556421   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:11.661222   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:11.968722   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:12.055413   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:12.055579   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:12.160280   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:12.469112   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:12.555837   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:12.555917   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:12.659793   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:12.968532   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:13.055662   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:13.055688   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:13.160558   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:13.468494   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:13.555147   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:13.555826   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:13.559238   20697 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pkrw7" in "kube-system" namespace has status "Ready":"False"
	I0205 02:05:13.660037   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:13.968811   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:14.057431   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:14.057478   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:14.160287   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:14.468849   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:14.555972   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:14.556162   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:14.660396   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:14.968288   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:15.055306   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:15.055889   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:15.160436   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:15.468417   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:15.555027   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:15.555924   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:15.559415   20697 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-pkrw7" in "kube-system" namespace has status "Ready":"False"
	I0205 02:05:15.660353   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:15.969078   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:16.055776   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:16.055835   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:16.160679   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:16.468240   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:16.554967   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:16.555807   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:16.660957   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:16.968728   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:17.055453   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:17.056403   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:17.160126   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:17.468339   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:17.555294   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:17.556127   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:17.661126   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:17.971079   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:18.055865   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:18.055987   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:18.059068   20697 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-pkrw7" in "kube-system" namespace has status "Ready":"True"
	I0205 02:05:18.059097   20697 pod_ready.go:82] duration metric: took 23.003883606s for pod "nvidia-device-plugin-daemonset-pkrw7" in "kube-system" namespace to be "Ready" ...
	I0205 02:05:18.059120   20697 pod_ready.go:39] duration metric: took 39.71238033s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0205 02:05:18.059145   20697 api_server.go:52] waiting for apiserver process to appear ...
	I0205 02:05:18.059201   20697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 02:05:18.072050   20697 api_server.go:72] duration metric: took 58.865911814s to wait for apiserver process to appear ...
	I0205 02:05:18.072071   20697 api_server.go:88] waiting for apiserver healthz status ...
	I0205 02:05:18.072089   20697 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0205 02:05:18.076642   20697 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0205 02:05:18.077496   20697 api_server.go:141] control plane version: v1.32.1
	I0205 02:05:18.077522   20697 api_server.go:131] duration metric: took 5.444043ms to wait for apiserver health ...
	I0205 02:05:18.077532   20697 system_pods.go:43] waiting for kube-system pods to appear ...
	I0205 02:05:18.080894   20697 system_pods.go:59] 19 kube-system pods found
	I0205 02:05:18.080921   20697 system_pods.go:61] "amd-gpu-device-plugin-576th" [8933c1a8-42a0-45c1-ae49-7f09ed352541] Running
	I0205 02:05:18.080926   20697 system_pods.go:61] "coredns-668d6bf9bc-4scds" [73401313-8e95-4f06-bcf7-92f82529da8d] Running
	I0205 02:05:18.080930   20697 system_pods.go:61] "csi-hostpath-attacher-0" [c4ace39a-e26c-464f-967d-06b131425881] Running
	I0205 02:05:18.080933   20697 system_pods.go:61] "csi-hostpath-resizer-0" [c7e62681-ad80-45fb-903d-44126f43b0ed] Running
	I0205 02:05:18.080943   20697 system_pods.go:61] "csi-hostpathplugin-ffg8d" [3859739f-8736-4929-8bc8-b7d0e3132c43] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0205 02:05:18.080951   20697 system_pods.go:61] "etcd-addons-217306" [39f16eba-9d1a-441a-8927-3393744954c8] Running
	I0205 02:05:18.080959   20697 system_pods.go:61] "kindnet-tnlbk" [30245f7a-17ee-4f7a-a259-d376ab2d03c8] Running
	I0205 02:05:18.080964   20697 system_pods.go:61] "kube-apiserver-addons-217306" [e009df88-4b07-40a0-ab3e-ac3650349c13] Running
	I0205 02:05:18.080972   20697 system_pods.go:61] "kube-controller-manager-addons-217306" [581f1cc5-3bf1-4815-ae8c-f5eb1b34eae5] Running
	I0205 02:05:18.080979   20697 system_pods.go:61] "kube-ingress-dns-minikube" [3d022b30-4081-45ce-8e8c-445c5eee4741] Running
	I0205 02:05:18.080983   20697 system_pods.go:61] "kube-proxy-8djtv" [fc5e044e-b868-4864-ba1f-7d2ad2d3fc4c] Running
	I0205 02:05:18.080988   20697 system_pods.go:61] "kube-scheduler-addons-217306" [be1b4214-8150-4afa-b79d-64eeee1a4347] Running
	I0205 02:05:18.080993   20697 system_pods.go:61] "metrics-server-7fbb699795-9jb6h" [8743c040-e408-4ca1-8b1b-ecd90bb14894] Running
	I0205 02:05:18.081000   20697 system_pods.go:61] "nvidia-device-plugin-daemonset-pkrw7" [4d3cffa4-42a6-427e-9dd1-335e4dc0455f] Running
	I0205 02:05:18.081003   20697 system_pods.go:61] "registry-6c88467877-fvmx4" [fd7a8710-7eef-4adb-80ed-b30907f7c30f] Running
	I0205 02:05:18.081008   20697 system_pods.go:61] "registry-proxy-nns4j" [ed804182-6e80-4df2-a1d1-e7cf7eb658ec] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0205 02:05:18.081017   20697 system_pods.go:61] "snapshot-controller-68b874b76f-s9wwc" [e477b50e-5700-41b7-86e3-ef8bac497bce] Running
	I0205 02:05:18.081022   20697 system_pods.go:61] "snapshot-controller-68b874b76f-tjxlk" [0828c4b7-1d5e-46e2-af1b-65b1f2f73cd8] Running
	I0205 02:05:18.081025   20697 system_pods.go:61] "storage-provisioner" [64fab702-27df-4286-91f1-c4ceb9738495] Running
	I0205 02:05:18.081030   20697 system_pods.go:74] duration metric: took 3.492391ms to wait for pod list to return data ...
	I0205 02:05:18.081036   20697 default_sa.go:34] waiting for default service account to be created ...
	I0205 02:05:18.083310   20697 default_sa.go:45] found service account: "default"
	I0205 02:05:18.083328   20697 default_sa.go:55] duration metric: took 2.28657ms for default service account to be created ...
	I0205 02:05:18.083335   20697 system_pods.go:116] waiting for k8s-apps to be running ...
	I0205 02:05:18.086350   20697 system_pods.go:86] 19 kube-system pods found
	I0205 02:05:18.086392   20697 system_pods.go:89] "amd-gpu-device-plugin-576th" [8933c1a8-42a0-45c1-ae49-7f09ed352541] Running
	I0205 02:05:18.086401   20697 system_pods.go:89] "coredns-668d6bf9bc-4scds" [73401313-8e95-4f06-bcf7-92f82529da8d] Running
	I0205 02:05:18.086407   20697 system_pods.go:89] "csi-hostpath-attacher-0" [c4ace39a-e26c-464f-967d-06b131425881] Running
	I0205 02:05:18.086413   20697 system_pods.go:89] "csi-hostpath-resizer-0" [c7e62681-ad80-45fb-903d-44126f43b0ed] Running
	I0205 02:05:18.086422   20697 system_pods.go:89] "csi-hostpathplugin-ffg8d" [3859739f-8736-4929-8bc8-b7d0e3132c43] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0205 02:05:18.086429   20697 system_pods.go:89] "etcd-addons-217306" [39f16eba-9d1a-441a-8927-3393744954c8] Running
	I0205 02:05:18.086436   20697 system_pods.go:89] "kindnet-tnlbk" [30245f7a-17ee-4f7a-a259-d376ab2d03c8] Running
	I0205 02:05:18.086442   20697 system_pods.go:89] "kube-apiserver-addons-217306" [e009df88-4b07-40a0-ab3e-ac3650349c13] Running
	I0205 02:05:18.086456   20697 system_pods.go:89] "kube-controller-manager-addons-217306" [581f1cc5-3bf1-4815-ae8c-f5eb1b34eae5] Running
	I0205 02:05:18.086462   20697 system_pods.go:89] "kube-ingress-dns-minikube" [3d022b30-4081-45ce-8e8c-445c5eee4741] Running
	I0205 02:05:18.086467   20697 system_pods.go:89] "kube-proxy-8djtv" [fc5e044e-b868-4864-ba1f-7d2ad2d3fc4c] Running
	I0205 02:05:18.086477   20697 system_pods.go:89] "kube-scheduler-addons-217306" [be1b4214-8150-4afa-b79d-64eeee1a4347] Running
	I0205 02:05:18.086480   20697 system_pods.go:89] "metrics-server-7fbb699795-9jb6h" [8743c040-e408-4ca1-8b1b-ecd90bb14894] Running
	I0205 02:05:18.086487   20697 system_pods.go:89] "nvidia-device-plugin-daemonset-pkrw7" [4d3cffa4-42a6-427e-9dd1-335e4dc0455f] Running
	I0205 02:05:18.086491   20697 system_pods.go:89] "registry-6c88467877-fvmx4" [fd7a8710-7eef-4adb-80ed-b30907f7c30f] Running
	I0205 02:05:18.086499   20697 system_pods.go:89] "registry-proxy-nns4j" [ed804182-6e80-4df2-a1d1-e7cf7eb658ec] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0205 02:05:18.086504   20697 system_pods.go:89] "snapshot-controller-68b874b76f-s9wwc" [e477b50e-5700-41b7-86e3-ef8bac497bce] Running
	I0205 02:05:18.086515   20697 system_pods.go:89] "snapshot-controller-68b874b76f-tjxlk" [0828c4b7-1d5e-46e2-af1b-65b1f2f73cd8] Running
	I0205 02:05:18.086520   20697 system_pods.go:89] "storage-provisioner" [64fab702-27df-4286-91f1-c4ceb9738495] Running
	I0205 02:05:18.086528   20697 system_pods.go:126] duration metric: took 3.187153ms to wait for k8s-apps to be running ...
	I0205 02:05:18.086540   20697 system_svc.go:44] waiting for kubelet service to be running ....
	I0205 02:05:18.086588   20697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 02:05:18.097955   20697 system_svc.go:56] duration metric: took 11.409163ms WaitForService to wait for kubelet
	I0205 02:05:18.097983   20697 kubeadm.go:582] duration metric: took 58.891847435s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 02:05:18.098008   20697 node_conditions.go:102] verifying NodePressure condition ...
	I0205 02:05:18.100652   20697 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0205 02:05:18.100684   20697 node_conditions.go:123] node cpu capacity is 8
	I0205 02:05:18.100700   20697 node_conditions.go:105] duration metric: took 2.686029ms to run NodePressure ...
	I0205 02:05:18.100716   20697 start.go:241] waiting for startup goroutines ...
	I0205 02:05:18.160571   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:18.468289   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:18.554775   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:18.555541   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:18.660046   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:18.968803   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:19.055751   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:19.055768   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:19.160580   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:19.468462   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:19.555351   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0205 02:05:19.555958   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:19.661118   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:20.029721   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:20.127001   20697 kapi.go:107] duration metric: took 55.074913321s to wait for kubernetes.io/minikube-addons=registry ...
	I0205 02:05:20.127108   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:20.227023   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:20.468866   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:20.556651   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:20.660645   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:20.968484   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:21.056389   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:21.160459   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:21.468193   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:21.556405   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:21.660132   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:21.968747   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:22.056710   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:22.160279   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:22.469315   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:22.556127   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:22.660816   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:23.028039   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:23.056921   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:23.160760   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:23.468764   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:23.569424   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:23.661925   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:23.968504   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:24.056259   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:24.160727   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:24.468831   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:24.556482   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:24.660205   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:24.969134   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:25.070150   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:25.171098   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:25.529081   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0205 02:05:25.648534   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:25.747592   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:26.029413   20697 kapi.go:107] duration metric: took 56.563939854s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0205 02:05:26.030942   20697 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-217306 cluster.
	I0205 02:05:26.032433   20697 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0205 02:05:26.034363   20697 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0205 02:05:26.056395   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:26.229590   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:26.627275   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:26.729284   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:27.056112   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:27.161003   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:27.556189   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:27.661066   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:28.055979   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:28.160705   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:28.557024   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:28.660420   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:29.056423   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:29.160148   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:29.556201   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:29.661445   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:30.056439   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:30.160439   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:30.556660   20697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0205 02:05:30.660693   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:31.057028   20697 kapi.go:107] duration metric: took 1m6.004010132s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0205 02:05:31.161367   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:31.660999   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:32.159891   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:32.660662   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:33.161193   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:33.691800   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:34.160389   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:34.661170   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:35.163400   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:35.660906   20697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0205 02:05:36.160249   20697 kapi.go:107] duration metric: took 1m9.503162494s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0205 02:05:36.161820   20697 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, inspektor-gadget, cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0205 02:05:36.162982   20697 addons.go:514] duration metric: took 1m16.956816553s for enable addons: enabled=[ingress-dns storage-provisioner inspektor-gadget cloud-spanner amd-gpu-device-plugin nvidia-device-plugin storage-provisioner-rancher metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0205 02:05:36.163024   20697 start.go:246] waiting for cluster config update ...
	I0205 02:05:36.163042   20697 start.go:255] writing updated cluster config ...
	I0205 02:05:36.163923   20697 ssh_runner.go:195] Run: rm -f paused
	I0205 02:05:36.211996   20697 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0205 02:05:36.213481   20697 out.go:177] * Done! kubectl is now configured to use "addons-217306" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 05 02:07:14 addons-217306 crio[1040]: time="2025-02-05 02:07:14.368507911Z" level=info msg="Removing pod sandbox: c7151ac4d37947512cbd12962721bfce20181e21acc0e13afe02209d0bd59a3e" id=2559040e-e146-4ac5-98f4-e17e042f1d3d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 05 02:07:14 addons-217306 crio[1040]: time="2025-02-05 02:07:14.374111690Z" level=info msg="Removed pod sandbox: c7151ac4d37947512cbd12962721bfce20181e21acc0e13afe02209d0bd59a3e" id=2559040e-e146-4ac5-98f4-e17e042f1d3d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 05 02:07:14 addons-217306 crio[1040]: time="2025-02-05 02:07:14.374389166Z" level=info msg="Stopping pod sandbox: d8e53809c36d971d7c8922143afd6267e8cbb007911b51be5d74851bffd99a9d" id=5521a1a6-74f0-4029-af38-b52c6d894bab name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 05 02:07:14 addons-217306 crio[1040]: time="2025-02-05 02:07:14.374412299Z" level=info msg="Stopped pod sandbox (already stopped): d8e53809c36d971d7c8922143afd6267e8cbb007911b51be5d74851bffd99a9d" id=5521a1a6-74f0-4029-af38-b52c6d894bab name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 05 02:07:14 addons-217306 crio[1040]: time="2025-02-05 02:07:14.374650284Z" level=info msg="Removing pod sandbox: d8e53809c36d971d7c8922143afd6267e8cbb007911b51be5d74851bffd99a9d" id=6a5c0507-1611-4a4a-8b01-336524c23bc4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 05 02:07:14 addons-217306 crio[1040]: time="2025-02-05 02:07:14.380025310Z" level=info msg="Removed pod sandbox: d8e53809c36d971d7c8922143afd6267e8cbb007911b51be5d74851bffd99a9d" id=6a5c0507-1611-4a4a-8b01-336524c23bc4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 05 02:07:14 addons-217306 crio[1040]: time="2025-02-05 02:07:14.380288992Z" level=info msg="Stopping pod sandbox: b0777d218a6ae0774719c4687d33eb53e464a7d9528011cf61dcb2f2d601c640" id=1109fc77-1a24-4fc6-a494-69ac78f40f6f name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 05 02:07:14 addons-217306 crio[1040]: time="2025-02-05 02:07:14.380310596Z" level=info msg="Stopped pod sandbox (already stopped): b0777d218a6ae0774719c4687d33eb53e464a7d9528011cf61dcb2f2d601c640" id=1109fc77-1a24-4fc6-a494-69ac78f40f6f name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 05 02:07:14 addons-217306 crio[1040]: time="2025-02-05 02:07:14.380525938Z" level=info msg="Removing pod sandbox: b0777d218a6ae0774719c4687d33eb53e464a7d9528011cf61dcb2f2d601c640" id=73a36767-f4d9-4557-bd01-b424d333ecc8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 05 02:07:14 addons-217306 crio[1040]: time="2025-02-05 02:07:14.386680100Z" level=info msg="Removed pod sandbox: b0777d218a6ae0774719c4687d33eb53e464a7d9528011cf61dcb2f2d601c640" id=73a36767-f4d9-4557-bd01-b424d333ecc8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 05 02:07:14 addons-217306 crio[1040]: time="2025-02-05 02:07:14.387011741Z" level=info msg="Stopping pod sandbox: 06c2fd777de642169c43bda2309c144d58bc85b783c1ca65ee0df82a3cf68f87" id=a71e51ba-6338-4011-a87f-fc25cfcdca8a name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 05 02:07:14 addons-217306 crio[1040]: time="2025-02-05 02:07:14.387045199Z" level=info msg="Stopped pod sandbox (already stopped): 06c2fd777de642169c43bda2309c144d58bc85b783c1ca65ee0df82a3cf68f87" id=a71e51ba-6338-4011-a87f-fc25cfcdca8a name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 05 02:07:14 addons-217306 crio[1040]: time="2025-02-05 02:07:14.387313047Z" level=info msg="Removing pod sandbox: 06c2fd777de642169c43bda2309c144d58bc85b783c1ca65ee0df82a3cf68f87" id=b48ccdf3-8698-4450-9d66-84ff72c1dc70 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 05 02:07:14 addons-217306 crio[1040]: time="2025-02-05 02:07:14.392677409Z" level=info msg="Removed pod sandbox: 06c2fd777de642169c43bda2309c144d58bc85b783c1ca65ee0df82a3cf68f87" id=b48ccdf3-8698-4450-9d66-84ff72c1dc70 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 05 02:08:30 addons-217306 crio[1040]: time="2025-02-05 02:08:30.106101798Z" level=info msg="Running pod sandbox: default/hello-world-app-7d9564db4-qfsjj/POD" id=f5c883b9-f39e-4bb7-860f-16bc731c95a7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 05 02:08:30 addons-217306 crio[1040]: time="2025-02-05 02:08:30.106172425Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 05 02:08:30 addons-217306 crio[1040]: time="2025-02-05 02:08:30.124212297Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-qfsjj Namespace:default ID:cb7587d3f389935dfa33e31a84a0b3776873247675776bea35e26294682c8e88 UID:84c52310-8870-48c1-a898-ec58eb4d4768 NetNS:/var/run/netns/828107e0-5c9c-4bbe-917b-ab72b611cddb Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 05 02:08:30 addons-217306 crio[1040]: time="2025-02-05 02:08:30.124243489Z" level=info msg="Adding pod default_hello-world-app-7d9564db4-qfsjj to CNI network \"kindnet\" (type=ptp)"
	Feb 05 02:08:30 addons-217306 crio[1040]: time="2025-02-05 02:08:30.133503156Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-qfsjj Namespace:default ID:cb7587d3f389935dfa33e31a84a0b3776873247675776bea35e26294682c8e88 UID:84c52310-8870-48c1-a898-ec58eb4d4768 NetNS:/var/run/netns/828107e0-5c9c-4bbe-917b-ab72b611cddb Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 05 02:08:30 addons-217306 crio[1040]: time="2025-02-05 02:08:30.133675714Z" level=info msg="Checking pod default_hello-world-app-7d9564db4-qfsjj for CNI network kindnet (type=ptp)"
	Feb 05 02:08:30 addons-217306 crio[1040]: time="2025-02-05 02:08:30.138289723Z" level=info msg="Ran pod sandbox cb7587d3f389935dfa33e31a84a0b3776873247675776bea35e26294682c8e88 with infra container: default/hello-world-app-7d9564db4-qfsjj/POD" id=f5c883b9-f39e-4bb7-860f-16bc731c95a7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 05 02:08:30 addons-217306 crio[1040]: time="2025-02-05 02:08:30.139211645Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d3974562-88e2-4a44-bd42-1ba15d56ae44 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:08:30 addons-217306 crio[1040]: time="2025-02-05 02:08:30.139425199Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=d3974562-88e2-4a44-bd42-1ba15d56ae44 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:08:30 addons-217306 crio[1040]: time="2025-02-05 02:08:30.139814312Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=860de3f8-31e2-4f57-a54a-1438195cf939 name=/runtime.v1.ImageService/PullImage
	Feb 05 02:08:30 addons-217306 crio[1040]: time="2025-02-05 02:08:30.168854183Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1a032bb150716       docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901                              2 minutes ago       Running             nginx                     0                   474120e91d322       nginx
	f6eefb30cf10d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   caa98886403d8       busybox
	7818c6e969110       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   2c77f2c9dac21       ingress-nginx-controller-56d7c84fd4-tgjtb
	22b686bd4edf3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              patch                     0                   31f7226ba6297       ingress-nginx-admission-patch-chg82
	a20ce6e64e8c6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   5f4aab00b8307       ingress-nginx-admission-create-6gt58
	f3267ebdd8e96       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             3 minutes ago       Running             minikube-ingress-dns      0                   c9e624a519eea       kube-ingress-dns-minikube
	22bd210a9c2f3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             3 minutes ago       Running             coredns                   0                   160e87bbdc661       coredns-668d6bf9bc-4scds
	bac2f2dffa06d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   f6356e96e1ee5       storage-provisioner
	fcf4984718f6a       docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26                           4 minutes ago       Running             kindnet-cni               0                   e7f58667ad092       kindnet-tnlbk
	85fd5a665da9e       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                             4 minutes ago       Running             kube-proxy                0                   690e3e96ef0f3       kube-proxy-8djtv
	45749f1765f43       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                             4 minutes ago       Running             kube-controller-manager   0                   fe226a80665e4       kube-controller-manager-addons-217306
	4bc1cf3d00411       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   e38857c2a5bd8       etcd-addons-217306
	15d400727074f       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                             4 minutes ago       Running             kube-scheduler            0                   81edcc811048b       kube-scheduler-addons-217306
	beebc6f638cbc       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                             4 minutes ago       Running             kube-apiserver            0                   e5bb7d1b5313a       kube-apiserver-addons-217306
	
	
	==> coredns [22bd210a9c2f30c5b291970ea91c6e5a5071ae11455cb7534ccf1a278b81aa8b] <==
	[INFO] 10.244.0.18:50946 - 28464 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000137032s
	[INFO] 10.244.0.18:44076 - 54180 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004001415s
	[INFO] 10.244.0.18:44076 - 54476 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004396249s
	[INFO] 10.244.0.18:51040 - 2516 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004494642s
	[INFO] 10.244.0.18:51040 - 2275 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.00484764s
	[INFO] 10.244.0.18:50122 - 41941 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004782473s
	[INFO] 10.244.0.18:50122 - 42259 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005197982s
	[INFO] 10.244.0.18:52581 - 41017 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010348s
	[INFO] 10.244.0.18:52581 - 41241 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000152552s
	[INFO] 10.244.0.21:34985 - 17507 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000212223s
	[INFO] 10.244.0.21:58058 - 54443 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00028877s
	[INFO] 10.244.0.21:42769 - 5557 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136559s
	[INFO] 10.244.0.21:35885 - 48427 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00010783s
	[INFO] 10.244.0.21:39955 - 27374 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000103882s
	[INFO] 10.244.0.21:52134 - 38320 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000155427s
	[INFO] 10.244.0.21:57097 - 2133 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.004913662s
	[INFO] 10.244.0.21:44146 - 40634 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005108658s
	[INFO] 10.244.0.21:35784 - 65377 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005033064s
	[INFO] 10.244.0.21:41684 - 56070 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006157484s
	[INFO] 10.244.0.21:46673 - 44487 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005316606s
	[INFO] 10.244.0.21:52119 - 12662 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005323948s
	[INFO] 10.244.0.21:42702 - 4363 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000636522s
	[INFO] 10.244.0.21:49982 - 27158 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000764494s
	[INFO] 10.244.0.26:50895 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000244447s
	[INFO] 10.244.0.26:45625 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000153902s
	
	
	==> describe nodes <==
	Name:               addons-217306
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-217306
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e14c106b6cd223b80b8f7425af26284a33f75c3d
	                    minikube.k8s.io/name=addons-217306
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_05T02_04_15_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-217306
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 05 Feb 2025 02:04:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-217306
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 05 Feb 2025 02:08:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 05 Feb 2025 02:06:47 +0000   Wed, 05 Feb 2025 02:04:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 05 Feb 2025 02:06:47 +0000   Wed, 05 Feb 2025 02:04:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 05 Feb 2025 02:06:47 +0000   Wed, 05 Feb 2025 02:04:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 05 Feb 2025 02:06:47 +0000   Wed, 05 Feb 2025 02:04:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-217306
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	System Info:
	  Machine ID:                 309d461591784016a5bd9fccb6743615
	  System UUID:                c3bfa995-98a4-450c-917d-527393b6e668
	  Boot ID:                    966de046-a0b5-476b-8b8b-9607817e1121
	  Kernel Version:             5.15.0-1075-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  default                     hello-world-app-7d9564db4-qfsjj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-tgjtb    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m7s
	  kube-system                 coredns-668d6bf9bc-4scds                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m12s
	  kube-system                 etcd-addons-217306                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m17s
	  kube-system                 kindnet-tnlbk                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m12s
	  kube-system                 kube-apiserver-addons-217306                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-addons-217306        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-8djtv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-scheduler-addons-217306                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m6s                   kube-proxy       
	  Normal   Starting                 4m22s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m22s (x8 over 4m22s)  kubelet          Node addons-217306 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m22s (x8 over 4m22s)  kubelet          Node addons-217306 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m22s (x8 over 4m22s)  kubelet          Node addons-217306 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m17s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m17s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m17s                  kubelet          Node addons-217306 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m17s                  kubelet          Node addons-217306 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m17s                  kubelet          Node addons-217306 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m13s                  node-controller  Node addons-217306 event: Registered Node addons-217306 in Controller
	  Normal   NodeReady                3m54s                  kubelet          Node addons-217306 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000853] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000699] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000710] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000685] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000650] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000724] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000674] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000740] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.623739] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023322] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.423487] kauditd_printk_skb: 46 callbacks suppressed
	[Feb 5 02:06] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[  +1.024110] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[  +2.015838] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[  +4.195567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[  +8.187239] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[ +16.126554] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[Feb 5 02:07] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	
	
	==> etcd [4bc1cf3d004110d744d9135db8d715b3277bb4a663f98fdd7d70787b2e4cea8d] <==
	{"level":"info","ts":"2025-02-05T02:04:10.434312Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-05T02:04:10.434123Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-05T02:04:10.434921Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-02-05T02:04:10.435280Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-05T02:04:21.827730Z","caller":"traceutil/trace.go:171","msg":"trace[14763687] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"101.241397ms","start":"2025-02-05T02:04:21.726457Z","end":"2025-02-05T02:04:21.827698Z","steps":["trace[14763687] 'process raft request'  (duration: 10.714893ms)","trace[14763687] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/configmaps/kube-system/coredns; req_size:827; } (duration: 90.305614ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-05T02:04:21.828190Z","caller":"traceutil/trace.go:171","msg":"trace[1335528721] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"101.30868ms","start":"2025-02-05T02:04:21.726869Z","end":"2025-02-05T02:04:21.828178Z","steps":["trace[1335528721] 'process raft request'  (duration: 101.227037ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T02:04:22.028270Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.202373ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4034"}
	{"level":"info","ts":"2025-02-05T02:04:22.028347Z","caller":"traceutil/trace.go:171","msg":"trace[1555290547] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:403; }","duration":"101.328688ms","start":"2025-02-05T02:04:21.927002Z","end":"2025-02-05T02:04:22.028331Z","steps":["trace[1555290547] 'range keys from bolt db'  (duration: 88.539724ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T02:04:22.526721Z","caller":"traceutil/trace.go:171","msg":"trace[553839746] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"199.322997ms","start":"2025-02-05T02:04:22.327378Z","end":"2025-02-05T02:04:22.526701Z","steps":["trace[553839746] 'process raft request'  (duration: 104.027573ms)","trace[553839746] 'compare'  (duration: 94.444944ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-05T02:04:22.526891Z","caller":"traceutil/trace.go:171","msg":"trace[1867947581] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"199.358129ms","start":"2025-02-05T02:04:22.327523Z","end":"2025-02-05T02:04:22.526881Z","steps":["trace[1867947581] 'process raft request'  (duration: 198.466504ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T02:04:22.527011Z","caller":"traceutil/trace.go:171","msg":"trace[1332300627] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"199.452198ms","start":"2025-02-05T02:04:22.327551Z","end":"2025-02-05T02:04:22.527003Z","steps":["trace[1332300627] 'process raft request'  (duration: 198.484987ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T02:04:22.527224Z","caller":"traceutil/trace.go:171","msg":"trace[1218124459] transaction","detail":"{read_only:false; response_revision:416; number_of_response:1; }","duration":"199.595959ms","start":"2025-02-05T02:04:22.327620Z","end":"2025-02-05T02:04:22.527216Z","steps":["trace[1218124459] 'process raft request'  (duration: 198.450745ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T02:04:22.527360Z","caller":"traceutil/trace.go:171","msg":"trace[2091235729] linearizableReadLoop","detail":"{readStateIndex:428; appliedIndex:424; }","duration":"181.917469ms","start":"2025-02-05T02:04:22.345430Z","end":"2025-02-05T02:04:22.527347Z","steps":["trace[2091235729] 'read index received'  (duration: 85.984865ms)","trace[2091235729] 'applied index is now lower than readState.Index'  (duration: 95.931914ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-05T02:04:22.527655Z","caller":"traceutil/trace.go:171","msg":"trace[768114226] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"181.842243ms","start":"2025-02-05T02:04:22.345765Z","end":"2025-02-05T02:04:22.527607Z","steps":["trace[768114226] 'process raft request'  (duration: 180.377781ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T02:04:22.527929Z","caller":"traceutil/trace.go:171","msg":"trace[735511046] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"181.61952ms","start":"2025-02-05T02:04:22.346299Z","end":"2025-02-05T02:04:22.527919Z","steps":["trace[735511046] 'process raft request'  (duration: 179.882884ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T02:04:22.528388Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.939682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-05T02:04:22.528463Z","caller":"traceutil/trace.go:171","msg":"trace[156029072] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:419; }","duration":"183.049922ms","start":"2025-02-05T02:04:22.345401Z","end":"2025-02-05T02:04:22.528451Z","steps":["trace[156029072] 'agreement among raft nodes before linearized reading'  (duration: 182.909781ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T02:04:22.529757Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.484558ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-02-05T02:04:22.529857Z","caller":"traceutil/trace.go:171","msg":"trace[276734456] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:419; }","duration":"183.613108ms","start":"2025-02-05T02:04:22.346233Z","end":"2025-02-05T02:04:22.529846Z","steps":["trace[276734456] 'agreement among raft nodes before linearized reading'  (duration: 182.405045ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-05T02:04:22.530434Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.485143ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-217306\" limit:1 ","response":"range_response_count:1 size:5655"}
	{"level":"warn","ts":"2025-02-05T02:04:22.530661Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.815221ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4096"}
	{"level":"info","ts":"2025-02-05T02:04:22.641722Z","caller":"traceutil/trace.go:171","msg":"trace[1317894111] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:419; }","duration":"213.871254ms","start":"2025-02-05T02:04:22.427810Z","end":"2025-02-05T02:04:22.641682Z","steps":["trace[1317894111] 'agreement among raft nodes before linearized reading'  (duration: 102.812267ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T02:04:22.531149Z","caller":"traceutil/trace.go:171","msg":"trace[80307488] range","detail":"{range_begin:/registry/minions/addons-217306; range_end:; response_count:1; response_revision:419; }","duration":"102.896102ms","start":"2025-02-05T02:04:22.427923Z","end":"2025-02-05T02:04:22.530819Z","steps":["trace[80307488] 'agreement among raft nodes before linearized reading'  (duration: 102.459171ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T02:05:45.254946Z","caller":"traceutil/trace.go:171","msg":"trace[557193131] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1279; }","duration":"123.697654ms","start":"2025-02-05T02:05:45.131225Z","end":"2025-02-05T02:05:45.254923Z","steps":["trace[557193131] 'process raft request'  (duration: 123.505717ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-05T02:05:56.777194Z","caller":"traceutil/trace.go:171","msg":"trace[330519255] transaction","detail":"{read_only:false; response_revision:1374; number_of_response:1; }","duration":"103.307283ms","start":"2025-02-05T02:05:56.673861Z","end":"2025-02-05T02:05:56.777168Z","steps":["trace[330519255] 'process raft request'  (duration: 103.160303ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:08:31 up 50 min,  0 users,  load average: 0.37, 0.60, 0.29
	Linux addons-217306 5.15.0-1075-gcp #84~20.04.1-Ubuntu SMP Thu Jan 16 20:44:47 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [fcf4984718f6a31c915cdaef8d2cf0057fb9570c081029c9e965e8a9f514070a] <==
	I0205 02:06:27.727212       1 main.go:301] handling current node
	I0205 02:06:37.731552       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:06:37.731590       1 main.go:301] handling current node
	I0205 02:06:47.726425       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:06:47.726472       1 main.go:301] handling current node
	I0205 02:06:57.727441       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:06:57.727475       1 main.go:301] handling current node
	I0205 02:07:07.732660       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:07:07.732698       1 main.go:301] handling current node
	I0205 02:07:17.730438       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:07:17.730486       1 main.go:301] handling current node
	I0205 02:07:27.726737       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:07:27.726774       1 main.go:301] handling current node
	I0205 02:07:37.729666       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:07:37.729719       1 main.go:301] handling current node
	I0205 02:07:47.733632       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:07:47.733675       1 main.go:301] handling current node
	I0205 02:07:57.727291       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:07:57.727332       1 main.go:301] handling current node
	I0205 02:08:07.733624       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:08:07.733656       1 main.go:301] handling current node
	I0205 02:08:17.727269       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:08:17.727301       1 main.go:301] handling current node
	I0205 02:08:27.727277       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:08:27.727313       1 main.go:301] handling current node
	
	
	==> kube-apiserver [beebc6f638cbcd792d6d761e7fb1e03d27d9c46124b2611fafbc8c3170d2b7e0] <==
	I0205 02:05:53.034288       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.93.222"}
	E0205 02:06:09.553910       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0205 02:06:09.559904       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0205 02:06:09.565973       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0205 02:06:09.630963       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0205 02:06:09.796540       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.251.222"}
	I0205 02:06:15.518992       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0205 02:06:16.638911       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0205 02:06:24.566215       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0205 02:06:25.984371       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0205 02:06:43.451350       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0205 02:06:55.807417       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0205 02:06:55.807467       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0205 02:06:55.821285       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0205 02:06:55.821333       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0205 02:06:55.822413       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0205 02:06:55.822519       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0205 02:06:55.831611       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0205 02:06:55.831719       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0205 02:06:55.846495       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0205 02:06:55.846534       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0205 02:06:56.822698       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0205 02:06:56.847159       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0205 02:06:56.952620       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0205 02:08:29.734692       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.231.205"}
	
	
	==> kube-controller-manager [45749f1765f43a572022162157f98648d24c646d71d23df6476a909d2408f0c2] <==
	E0205 02:07:34.472991       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0205 02:07:36.702223       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0205 02:07:36.703066       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0205 02:07:36.703756       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0205 02:07:36.703789       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0205 02:08:11.701052       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0205 02:08:11.701942       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0205 02:08:11.702741       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0205 02:08:11.702781       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0205 02:08:16.416166       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0205 02:08:16.416961       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0205 02:08:16.417899       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0205 02:08:16.417927       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0205 02:08:20.314604       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0205 02:08:20.315367       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0205 02:08:20.316178       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0205 02:08:20.316205       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0205 02:08:21.147500       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0205 02:08:21.148404       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0205 02:08:21.149174       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0205 02:08:21.149207       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0205 02:08:29.502796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="10.494723ms"
	I0205 02:08:29.511392       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="8.554477ms"
	I0205 02:08:29.511477       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="47.785µs"
	I0205 02:08:29.514192       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="41.073µs"
	
	
	==> kube-proxy [85fd5a665da9efdfe5ccafda8b4e83362dd2bdc6b894de55c471dd82e18959ee] <==
	I0205 02:04:23.246211       1 server_linux.go:66] "Using iptables proxy"
	I0205 02:04:23.835119       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0205 02:04:23.835208       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0205 02:04:24.127377       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0205 02:04:24.127460       1 server_linux.go:170] "Using iptables Proxier"
	I0205 02:04:24.129964       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0205 02:04:24.137184       1 server.go:497] "Version info" version="v1.32.1"
	I0205 02:04:24.137265       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 02:04:24.139789       1 config.go:105] "Starting endpoint slice config controller"
	I0205 02:04:24.139873       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0205 02:04:24.140415       1 config.go:329] "Starting node config controller"
	I0205 02:04:24.140433       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0205 02:04:24.140562       1 config.go:199] "Starting service config controller"
	I0205 02:04:24.140607       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0205 02:04:24.240665       1 shared_informer.go:320] Caches are synced for node config
	I0205 02:04:24.240813       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0205 02:04:24.240911       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [15d400727074f4cc2bf970198dbc591d224130ce2c2c70bef3d89d076d03fa82] <==
	W0205 02:04:11.951242       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0205 02:04:11.951350       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:11.951302       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0205 02:04:11.951423       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:12.764856       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0205 02:04:12.764893       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:12.793627       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0205 02:04:12.793668       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:12.794567       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0205 02:04:12.794607       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:12.847918       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0205 02:04:12.847967       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:12.901285       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0205 02:04:12.901330       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:12.936745       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0205 02:04:12.936792       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:13.026888       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0205 02:04:13.026928       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:13.063538       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0205 02:04:13.063581       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:13.136987       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0205 02:04:13.137036       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0205 02:04:13.146409       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0205 02:04:13.146445       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0205 02:04:16.247438       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 05 02:08:14 addons-217306 kubelet[1634]: E0205 02:08:14.333264    1634 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/44bacaa531d696abb190631fc64abc1fe285ffbcac354cbbca19ca943ebadeb7/diff" to get inode usage: stat /var/lib/containers/storage/overlay/44bacaa531d696abb190631fc64abc1fe285ffbcac354cbbca19ca943ebadeb7/diff: no such file or directory, extraDiskErr: <nil>
	Feb 05 02:08:14 addons-217306 kubelet[1634]: E0205 02:08:14.335422    1634 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/57266b1a5a66d6ca7dc2ea81b2ed6b37d4aadcd2a5821f680a70a256f227a8fe/diff" to get inode usage: stat /var/lib/containers/storage/overlay/57266b1a5a66d6ca7dc2ea81b2ed6b37d4aadcd2a5821f680a70a256f227a8fe/diff: no such file or directory, extraDiskErr: <nil>
	Feb 05 02:08:14 addons-217306 kubelet[1634]: E0205 02:08:14.335469    1634 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/56d06bfa40d954f2cf7641df0a9092dde4de698df20e516e2e43883073ac5e4b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/56d06bfa40d954f2cf7641df0a9092dde4de698df20e516e2e43883073ac5e4b/diff: no such file or directory, extraDiskErr: <nil>
	Feb 05 02:08:14 addons-217306 kubelet[1634]: E0205 02:08:14.336597    1634 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/37cb694172649b9c620d60b3b641d6d9f867f1f22f8695df0bf01b04034ed677/diff" to get inode usage: stat /var/lib/containers/storage/overlay/37cb694172649b9c620d60b3b641d6d9f867f1f22f8695df0bf01b04034ed677/diff: no such file or directory, extraDiskErr: <nil>
	Feb 05 02:08:14 addons-217306 kubelet[1634]: E0205 02:08:14.337734    1634 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d880f16bebc85a21ac6bdf8f0756d631702b324601dc5d87c5e9a31bcba00ee1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d880f16bebc85a21ac6bdf8f0756d631702b324601dc5d87c5e9a31bcba00ee1/diff: no such file or directory, extraDiskErr: <nil>
	Feb 05 02:08:14 addons-217306 kubelet[1634]: E0205 02:08:14.337739    1634 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/94ca14b7208834699856a84d98ca5d26df2d418c4ad3bfddef78343ae2dd83b4/diff" to get inode usage: stat /var/lib/containers/storage/overlay/94ca14b7208834699856a84d98ca5d26df2d418c4ad3bfddef78343ae2dd83b4/diff: no such file or directory, extraDiskErr: <nil>
	Feb 05 02:08:14 addons-217306 kubelet[1634]: E0205 02:08:14.338847    1634 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ab2bffd1fc5938e405b13ab233de242330026f0e1b42ee968171a783a4dba9ee/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ab2bffd1fc5938e405b13ab233de242330026f0e1b42ee968171a783a4dba9ee/diff: no such file or directory, extraDiskErr: <nil>
	Feb 05 02:08:14 addons-217306 kubelet[1634]: E0205 02:08:14.341102    1634 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c6f7550b0bb7cca5dde76e5796d48d23ba85a7aaed0492728589b6a28e766eb6/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c6f7550b0bb7cca5dde76e5796d48d23ba85a7aaed0492728589b6a28e766eb6/diff: no such file or directory, extraDiskErr: <nil>
	Feb 05 02:08:14 addons-217306 kubelet[1634]: E0205 02:08:14.344450    1634 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c6f7550b0bb7cca5dde76e5796d48d23ba85a7aaed0492728589b6a28e766eb6/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c6f7550b0bb7cca5dde76e5796d48d23ba85a7aaed0492728589b6a28e766eb6/diff: no such file or directory, extraDiskErr: <nil>
	Feb 05 02:08:20 addons-217306 kubelet[1634]: I0205 02:08:20.177518    1634 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Feb 05 02:08:24 addons-217306 kubelet[1634]: E0205 02:08:24.281436    1634 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721304281230686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617310,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:08:24 addons-217306 kubelet[1634]: E0205 02:08:24.281468    1634 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721304281230686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617310,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:08:29 addons-217306 kubelet[1634]: I0205 02:08:29.503865    1634 memory_manager.go:355] "RemoveStaleState removing state" podUID="e477b50e-5700-41b7-86e3-ef8bac497bce" containerName="volume-snapshot-controller"
	Feb 05 02:08:29 addons-217306 kubelet[1634]: I0205 02:08:29.503911    1634 memory_manager.go:355] "RemoveStaleState removing state" podUID="0828c4b7-1d5e-46e2-af1b-65b1f2f73cd8" containerName="volume-snapshot-controller"
	Feb 05 02:08:29 addons-217306 kubelet[1634]: I0205 02:08:29.503921    1634 memory_manager.go:355] "RemoveStaleState removing state" podUID="3859739f-8736-4929-8bc8-b7d0e3132c43" containerName="csi-snapshotter"
	Feb 05 02:08:29 addons-217306 kubelet[1634]: I0205 02:08:29.503930    1634 memory_manager.go:355] "RemoveStaleState removing state" podUID="3859739f-8736-4929-8bc8-b7d0e3132c43" containerName="node-driver-registrar"
	Feb 05 02:08:29 addons-217306 kubelet[1634]: I0205 02:08:29.503939    1634 memory_manager.go:355] "RemoveStaleState removing state" podUID="3859739f-8736-4929-8bc8-b7d0e3132c43" containerName="csi-provisioner"
	Feb 05 02:08:29 addons-217306 kubelet[1634]: I0205 02:08:29.503947    1634 memory_manager.go:355] "RemoveStaleState removing state" podUID="13d75a86-2183-4256-a0f2-f2409fd343dd" containerName="task-pv-container"
	Feb 05 02:08:29 addons-217306 kubelet[1634]: I0205 02:08:29.503956    1634 memory_manager.go:355] "RemoveStaleState removing state" podUID="3859739f-8736-4929-8bc8-b7d0e3132c43" containerName="liveness-probe"
	Feb 05 02:08:29 addons-217306 kubelet[1634]: I0205 02:08:29.503964    1634 memory_manager.go:355] "RemoveStaleState removing state" podUID="c4ace39a-e26c-464f-967d-06b131425881" containerName="csi-attacher"
	Feb 05 02:08:29 addons-217306 kubelet[1634]: I0205 02:08:29.503972    1634 memory_manager.go:355] "RemoveStaleState removing state" podUID="3859739f-8736-4929-8bc8-b7d0e3132c43" containerName="hostpath"
	Feb 05 02:08:29 addons-217306 kubelet[1634]: I0205 02:08:29.503979    1634 memory_manager.go:355] "RemoveStaleState removing state" podUID="c7e62681-ad80-45fb-903d-44126f43b0ed" containerName="csi-resizer"
	Feb 05 02:08:29 addons-217306 kubelet[1634]: I0205 02:08:29.503986    1634 memory_manager.go:355] "RemoveStaleState removing state" podUID="3859739f-8736-4929-8bc8-b7d0e3132c43" containerName="csi-external-health-monitor-controller"
	Feb 05 02:08:29 addons-217306 kubelet[1634]: I0205 02:08:29.725982    1634 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtdbj\" (UniqueName: \"kubernetes.io/projected/84c52310-8870-48c1-a898-ec58eb4d4768-kube-api-access-gtdbj\") pod \"hello-world-app-7d9564db4-qfsjj\" (UID: \"84c52310-8870-48c1-a898-ec58eb4d4768\") " pod="default/hello-world-app-7d9564db4-qfsjj"
	Feb 05 02:08:30 addons-217306 kubelet[1634]: W0205 02:08:30.135386    1634 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/fe98dd6bd6c16a7288117c1fff90ee41fdfbaf8a0d6539ec00798251897e53f1/crio-cb7587d3f389935dfa33e31a84a0b3776873247675776bea35e26294682c8e88 WatchSource:0}: Error finding container cb7587d3f389935dfa33e31a84a0b3776873247675776bea35e26294682c8e88: Status 404 returned error can't find the container with id cb7587d3f389935dfa33e31a84a0b3776873247675776bea35e26294682c8e88
	
	
	==> storage-provisioner [bac2f2dffa06d7856e908eafdd008da971af0405323bb5bc8ccf147bf5cbd6c9] <==
	I0205 02:04:39.126998       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0205 02:04:39.134888       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0205 02:04:39.134930       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0205 02:04:39.145104       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0205 02:04:39.145253       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-217306_87eb69ca-9896-425e-b1a2-2831fbc7ef3a!
	I0205 02:04:39.145253       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3cd682a5-5418-4952-bb70-c176ddfcb505", APIVersion:"v1", ResourceVersion:"933", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-217306_87eb69ca-9896-425e-b1a2-2831fbc7ef3a became leader
	I0205 02:04:39.245518       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-217306_87eb69ca-9896-425e-b1a2-2831fbc7ef3a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-217306 -n addons-217306
helpers_test.go:261: (dbg) Run:  kubectl --context addons-217306 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-qfsjj ingress-nginx-admission-create-6gt58 ingress-nginx-admission-patch-chg82
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-217306 describe pod hello-world-app-7d9564db4-qfsjj ingress-nginx-admission-create-6gt58 ingress-nginx-admission-patch-chg82
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-217306 describe pod hello-world-app-7d9564db4-qfsjj ingress-nginx-admission-create-6gt58 ingress-nginx-admission-patch-chg82: exit status 1 (62.027301ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-qfsjj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-217306/192.168.49.2
	Start Time:       Wed, 05 Feb 2025 02:08:29 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gtdbj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gtdbj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-qfsjj to addons-217306
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6gt58" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-chg82" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-217306 describe pod hello-world-app-7d9564db4-qfsjj ingress-nginx-admission-create-6gt58 ingress-nginx-admission-patch-chg82: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-217306 addons disable ingress-dns --alsologtostderr -v=1: (1.411569712s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-217306 addons disable ingress --alsologtostderr -v=1: (7.606855741s)
--- FAIL: TestAddons/parallel/Ingress (151.51s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (187.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [018a1912-12cc-4c2d-a6a8-16f510477d86] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00360326s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-150463 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-150463 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-150463 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-150463 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [deb6ce88-cff6-4e4a-8ced-26424587b7f8] Pending
helpers_test.go:344: "sp-pod" [deb6ce88-cff6-4e4a-8ced-26424587b7f8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0205 02:11:58.769773   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-150463 -n functional-150463
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-02-05 02:14:28.984571722 +0000 UTC m=+665.881864974
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-150463 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-150463 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-150463/192.168.49.2
Start Time:       Wed, 05 Feb 2025 02:11:28 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7fm5r (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-7fm5r:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  3m                default-scheduler  Successfully assigned default/sp-pod to functional-150463
  Warning  Failed     93s               kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     93s               kubelet            Error: ErrImagePull
  Normal   BackOff    93s               kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     93s               kubelet            Error: ImagePullBackOff
  Normal   Pulling    79s (x2 over 3m)  kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-150463 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-150463 logs sp-pod -n default: exit status 1 (59.534737ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-150463 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-150463
helpers_test.go:235: (dbg) docker inspect functional-150463:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3",
	        "Created": "2025-02-05T02:09:38.244942193Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43246,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-05T02:09:38.357852235Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e72c4cbe9b296d8a58fbcae1a7b969fa1cee662cd7b86f2d4efc5e146519cf0a",
	        "ResolvConfPath": "/var/lib/docker/containers/1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3/hostname",
	        "HostsPath": "/var/lib/docker/containers/1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3/hosts",
	        "LogPath": "/var/lib/docker/containers/1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3/1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3-json.log",
	        "Name": "/functional-150463",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-150463:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-150463",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fee2a687b704529f4df42910f79cb67e46a4feca21ce975089681c8faf6fcdf6-init/diff:/var/lib/docker/overlay2/f186c7f5b5e3359a3aedb1825f83d9f64c1bd7ca8cd203398cd99d9b6a74d20a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fee2a687b704529f4df42910f79cb67e46a4feca21ce975089681c8faf6fcdf6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fee2a687b704529f4df42910f79cb67e46a4feca21ce975089681c8faf6fcdf6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fee2a687b704529f4df42910f79cb67e46a4feca21ce975089681c8faf6fcdf6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-150463",
	                "Source": "/var/lib/docker/volumes/functional-150463/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-150463",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-150463",
	                "name.minikube.sigs.k8s.io": "functional-150463",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7d4d442a2b9547651fc8072acaa710780833e64d025702f78e300fc01fa417a2",
	            "SandboxKey": "/var/run/docker/netns/7d4d442a2b95",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-150463": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "fa6ceaab36f5a0e09832bac108f0625bd802864af516fc695b9c90ae04d836cf",
	                    "EndpointID": "d4f45c2bfc32b7a873b206653e19d0d3f5fcc672456687e8fa0e91c3a56ee2f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-150463",
	                        "1b91ea1b28c8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-150463 -n functional-150463
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-150463 logs -n 25: (1.335302089s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-150463 ssh findmnt                                            | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:12 UTC |                     |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| mount     | -p functional-150463                                                     | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:12 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdany-port2833295931/001:/mount-9p      |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-150463 ssh findmnt                                            | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:12 UTC | 05 Feb 25 02:12 UTC |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh       | functional-150463 ssh -- ls                                              | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:12 UTC | 05 Feb 25 02:12 UTC |
	|           | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh       | functional-150463 ssh cat                                                | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:12 UTC | 05 Feb 25 02:12 UTC |
	|           | /mount-9p/test-1738721565952644482                                       |                   |         |         |                     |                     |
	| ssh       | functional-150463 ssh stat                                               | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|           | /mount-9p/created-by-test                                                |                   |         |         |                     |                     |
	| ssh       | functional-150463 ssh stat                                               | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|           | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| ssh       | functional-150463 ssh sudo                                               | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|           | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount     | -p functional-150463                                                     | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdspecific-port3276311860/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh       | functional-150463 ssh findmnt                                            | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh       | functional-150463 ssh findmnt                                            | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh       | functional-150463 ssh -- ls                                              | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|           | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh       | functional-150463 ssh sudo                                               | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|           | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| ssh       | functional-150463 ssh findmnt                                            | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|           | -T /mount1                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-150463                                                     | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup61532747/001:/mount2     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount     | -p functional-150463                                                     | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup61532747/001:/mount1     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount     | -p functional-150463                                                     | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup61532747/001:/mount3     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-150463 ssh findmnt                                            | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|           | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh       | functional-150463 ssh findmnt                                            | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|           | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh       | functional-150463 ssh findmnt                                            | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|           | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-150463                                                     | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|           | --kill=true                                                              |                   |         |         |                     |                     |
	| start     | -p functional-150463                                                     | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|           | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|           | --driver=docker                                                          |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| start     | -p functional-150463                                                     | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|           | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|           | --driver=docker                                                          |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| start     | -p functional-150463                                                     | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|           | --dry-run --alsologtostderr                                              |                   |         |         |                     |                     |
	|           | -v=1 --driver=docker                                                     |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                       | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|           | -p functional-150463                                                     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
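
	The three simultaneous mounts and findmnt checks in the table above can be replayed by hand against the same profile. This is only a sketch: the host path /tmp/mnt stands in for the temporary directory the test generated, and it assumes the functional-150463 profile is still running.
	# start a 9p mount in the background (hypothetical host path)
	out/minikube-linux-amd64 mount -p functional-150463 /tmp/mnt:/mount1 --alsologtostderr -v=1 &
	# confirm the mount is visible inside the node
	out/minikube-linux-amd64 -p functional-150463 ssh "findmnt -T /mount1"
	# clean up every mount started for this profile (same --kill flag the test used)
	out/minikube-linux-amd64 mount -p functional-150463 --kill=true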
	
	
	==> Last Start <==
	Log file created at: 2025/02/05 02:14:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0205 02:14:20.118773   60365 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:14:20.118863   60365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:14:20.118871   60365 out.go:358] Setting ErrFile to fd 2...
	I0205 02:14:20.118875   60365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:14:20.119034   60365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
	I0205 02:14:20.119524   60365 out.go:352] Setting JSON to false
	I0205 02:14:20.120407   60365 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3406,"bootTime":1738718254,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 02:14:20.120505   60365 start.go:139] virtualization: kvm guest
	I0205 02:14:20.122588   60365 out.go:177] * [functional-150463] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 02:14:20.124199   60365 notify.go:220] Checking for updates...
	I0205 02:14:20.124230   60365 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 02:14:20.125783   60365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 02:14:20.127349   60365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig
	I0205 02:14:20.128928   60365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube
	I0205 02:14:20.130250   60365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 02:14:20.131560   60365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 02:14:20.133131   60365 config.go:182] Loaded profile config "functional-150463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:14:20.133629   60365 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 02:14:20.155575   60365 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0205 02:14:20.155661   60365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0205 02:14:20.202080   60365 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-05 02:14:20.193450632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0205 02:14:20.202186   60365 docker.go:318] overlay module found
	I0205 02:14:20.203946   60365 out.go:177] * Using the docker driver based on existing profile
	I0205 02:14:20.205218   60365 start.go:297] selected driver: docker
	I0205 02:14:20.205232   60365 start.go:901] validating driver "docker" against &{Name:functional-150463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-150463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:14:20.205323   60365 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 02:14:20.205405   60365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0205 02:14:20.250857   60365 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-05 02:14:20.242372606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0205 02:14:20.251685   60365 cni.go:84] Creating CNI manager for ""
	I0205 02:14:20.251758   60365 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0205 02:14:20.251816   60365 start.go:340] cluster config:
	{Name:functional-150463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-150463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:14:20.253860   60365 out.go:177] * dry-run validation complete!
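
	A dry-run start like the one logged above only validates the driver and cluster config, without touching the running node. A hedged way to repeat it, using the same flags recorded in the command table, is:
	out/minikube-linux-amd64 start -p functional-150463 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio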
	
	
	==> CRI-O <==
	Feb 05 02:14:13 functional-150463 crio[4958]: time="2025-02-05 02:14:13.520690524Z" level=info msg="Started container" PID=7957 containerID=cd9697378be8d05b510ab1fd69a0011bdf37016a50df76c381002bd0eb384467 description=default/busybox-mount/mount-munger id=97523d00-d101-4e93-8fc2-07ab4a2c66f1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c98cbca67b5bb5ad922ebdc7da2da66d9de8fc740601f99d4b7a131069df21e5
	Feb 05 02:14:14 functional-150463 crio[4958]: time="2025-02-05 02:14:14.964863300Z" level=info msg="Stopping pod sandbox: c98cbca67b5bb5ad922ebdc7da2da66d9de8fc740601f99d4b7a131069df21e5" id=c123d5f5-2b57-422b-9339-d26d92f23a75 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 05 02:14:14 functional-150463 crio[4958]: time="2025-02-05 02:14:14.965113166Z" level=info msg="Got pod network &{Name:busybox-mount Namespace:default ID:c98cbca67b5bb5ad922ebdc7da2da66d9de8fc740601f99d4b7a131069df21e5 UID:deabf349-132c-4d15-91b9-39655fc5a5bc NetNS:/var/run/netns/e6aedcf0-d36d-4686-b9e1-3413cc000bf6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 05 02:14:14 functional-150463 crio[4958]: time="2025-02-05 02:14:14.965239056Z" level=info msg="Deleting pod default_busybox-mount from CNI network \"kindnet\" (type=ptp)"
	Feb 05 02:14:15 functional-150463 crio[4958]: time="2025-02-05 02:14:15.003146687Z" level=info msg="Stopped pod sandbox: c98cbca67b5bb5ad922ebdc7da2da66d9de8fc740601f99d4b7a131069df21e5" id=c123d5f5-2b57-422b-9339-d26d92f23a75 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.585716655Z" level=info msg="Running pod sandbox: kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-dkbll/POD" id=3200da13-aee3-4eac-a2c9-fe9976f3354d name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.585799926Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.600044319Z" level=info msg="Got pod network &{Name:kubernetes-dashboard-7779f9b69b-dkbll Namespace:kubernetes-dashboard ID:48ec35f36e81469698e58a179620e916857b59dd44ab5c6fbf32db5c7853782d UID:5ad938cf-5295-46e5-a62f-69b747c68755 NetNS:/var/run/netns/bed84001-9d1c-490d-a66e-b1c298fc1120 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.600077416Z" level=info msg="Adding pod kubernetes-dashboard_kubernetes-dashboard-7779f9b69b-dkbll to CNI network \"kindnet\" (type=ptp)"
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.608978527Z" level=info msg="Got pod network &{Name:kubernetes-dashboard-7779f9b69b-dkbll Namespace:kubernetes-dashboard ID:48ec35f36e81469698e58a179620e916857b59dd44ab5c6fbf32db5c7853782d UID:5ad938cf-5295-46e5-a62f-69b747c68755 NetNS:/var/run/netns/bed84001-9d1c-490d-a66e-b1c298fc1120 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.609103838Z" level=info msg="Checking pod kubernetes-dashboard_kubernetes-dashboard-7779f9b69b-dkbll for CNI network kindnet (type=ptp)"
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.611455073Z" level=info msg="Ran pod sandbox 48ec35f36e81469698e58a179620e916857b59dd44ab5c6fbf32db5c7853782d with infra container: kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-dkbll/POD" id=3200da13-aee3-4eac-a2c9-fe9976f3354d name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.612444165Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=340cbbb5-7154-453c-bedc-857607013a22 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.612717570Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=340cbbb5-7154-453c-bedc-857607013a22 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.640375487Z" level=info msg="Running pod sandbox: kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-mq6wm/POD" id=ec97c613-e956-496b-b9f9-d45589805593 name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.640439915Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.654036338Z" level=info msg="Got pod network &{Name:dashboard-metrics-scraper-5d59dccf9b-mq6wm Namespace:kubernetes-dashboard ID:33454ed6ccab965be768da552c9969d4b95fb5197a49c80b90ba227d2e90398c UID:29a0122f-da9c-408d-940c-0f3444201a22 NetNS:/var/run/netns/5c685e2f-9fe5-400d-99c9-67229040f3e5 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.654069408Z" level=info msg="Adding pod kubernetes-dashboard_dashboard-metrics-scraper-5d59dccf9b-mq6wm to CNI network \"kindnet\" (type=ptp)"
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.662610299Z" level=info msg="Got pod network &{Name:dashboard-metrics-scraper-5d59dccf9b-mq6wm Namespace:kubernetes-dashboard ID:33454ed6ccab965be768da552c9969d4b95fb5197a49c80b90ba227d2e90398c UID:29a0122f-da9c-408d-940c-0f3444201a22 NetNS:/var/run/netns/5c685e2f-9fe5-400d-99c9-67229040f3e5 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.662765919Z" level=info msg="Checking pod kubernetes-dashboard_dashboard-metrics-scraper-5d59dccf9b-mq6wm for CNI network kindnet (type=ptp)"
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.665160062Z" level=info msg="Ran pod sandbox 33454ed6ccab965be768da552c9969d4b95fb5197a49c80b90ba227d2e90398c with infra container: kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-mq6wm/POD" id=ec97c613-e956-496b-b9f9-d45589805593 name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.666173881Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=95c92bd4-410b-4660-8978-3a10139b852e name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:14:21 functional-150463 crio[4958]: time="2025-02-05 02:14:21.666384065Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=95c92bd4-410b-4660-8978-3a10139b852e name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:14:27 functional-150463 crio[4958]: time="2025-02-05 02:14:27.454116446Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=f0255f3d-791e-4989-b8da-78744170fdeb name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:14:27 functional-150463 crio[4958]: time="2025-02-05 02:14:27.454356008Z" level=info msg="Image docker.io/nginx:alpine not found" id=f0255f3d-791e-4989-b8da-78744170fdeb name=/runtime.v1.ImageService/ImageStatus
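
	The last two CRI-O entries report docker.io/nginx:alpine as not yet present in the node's image store. A quick way to see what CRI-O actually holds is to run crictl inside the node; this is a sketch and assumes the profile is still up:
	out/minikube-linux-amd64 -p functional-150463 ssh "sudo crictl images"
	out/minikube-linux-amd64 -p functional-150463 ssh "sudo crictl pods"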
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	cd9697378be8d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   16 seconds ago       Exited              mount-munger              0                   c98cbca67b5bb       busybox-mount
	64b2d0fdcd77b       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                      About a minute ago   Running             echoserver                0                   14078b01214e1       hello-node-fcfd88b6f-hctqr
	5133253b72618       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969    2 minutes ago        Running             echoserver                0                   209b08b607a1d       hello-node-connect-58f9cf68d8-9vzwq
	ec06c329b648e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      3 minutes ago        Running             coredns                   2                   dcccc61ea8905       coredns-668d6bf9bc-bqrbg
	87850cf31d3cd       d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56                                      3 minutes ago        Running             kindnet-cni               2                   08770c641feee       kindnet-mts45
	be2a92594c2a8       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                      3 minutes ago        Running             kube-proxy                2                   c75032f167cc3       kube-proxy-snh97
	c1cff4b786f8c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago        Running             storage-provisioner       3                   ceb7bd0829f76       storage-provisioner
	f8ecd865b69e2       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                      3 minutes ago        Running             kube-apiserver            0                   bab50022e3c1e       kube-apiserver-functional-150463
	22c589e31dbb6       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                      3 minutes ago        Running             etcd                      2                   3781e3fad15e8       etcd-functional-150463
	c3040259cf477       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                      3 minutes ago        Running             kube-scheduler            2                   d7b1f48a9c976       kube-scheduler-functional-150463
	84dda87c92b8d       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                      3 minutes ago        Running             kube-controller-manager   2                   ac70688bb0946       kube-controller-manager-functional-150463
	6e971732adbb9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago        Exited              storage-provisioner       2                   ceb7bd0829f76       storage-provisioner
	a379f4573d94b       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                      4 minutes ago        Exited              etcd                      1                   3781e3fad15e8       etcd-functional-150463
	26456add96d27       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                      4 minutes ago        Exited              kube-scheduler            1                   d7b1f48a9c976       kube-scheduler-functional-150463
	ffbcf8d962014       d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56                                      4 minutes ago        Exited              kindnet-cni               1                   08770c641feee       kindnet-mts45
	d193a52476c7a       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                      4 minutes ago        Exited              kube-proxy                1                   c75032f167cc3       kube-proxy-snh97
	feaa0863ed3f4       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                      4 minutes ago        Exited              kube-controller-manager   1                   ac70688bb0946       kube-controller-manager-functional-150463
	c31b8dfd67ffd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago        Exited              coredns                   1                   dcccc61ea8905       coredns-668d6bf9bc-bqrbg
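
	The container status table above is essentially what crictl reports for all containers, running and exited. A hedged equivalent from the host, assuming the node is still reachable, is:
	out/minikube-linux-amd64 -p functional-150463 ssh "sudo crictl ps -a"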
	
	
	==> coredns [c31b8dfd67ffdb1d7910aa5c697998e5b5000f47ee51bfd7c5e6423cff94177a] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37674 - 62233 "HINFO IN 4582310575312474049.8829241919095050275. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.102100674s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ec06c329b648e613776528bf7cc8f7b2e9121ce33bf0d096da43b714b4ee2bc6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58767 - 45933 "HINFO IN 3679247596174586508.4741551725947178915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041586583s
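
	Both coredns instances eventually reach ".:53" and answer the HINFO self-check, so cluster DNS is serving. A hedged double-check from outside the node, assuming the standard kubeadm k8s-app=kube-dns label:
	kubectl --context functional-150463 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context functional-150463 -n kube-system logs -l k8s-app=kube-dns --tail=20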
	
	
	==> describe nodes <==
	Name:               functional-150463
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-150463
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e14c106b6cd223b80b8f7425af26284a33f75c3d
	                    minikube.k8s.io/name=functional-150463
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_05T02_09_53_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 05 Feb 2025 02:09:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-150463
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 05 Feb 2025 02:14:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 05 Feb 2025 02:14:29 +0000   Wed, 05 Feb 2025 02:09:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 05 Feb 2025 02:14:29 +0000   Wed, 05 Feb 2025 02:09:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 05 Feb 2025 02:14:29 +0000   Wed, 05 Feb 2025 02:09:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 05 Feb 2025 02:14:29 +0000   Wed, 05 Feb 2025 02:10:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-150463
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	System Info:
	  Machine ID:                 4fe7387702c74aacb28a7655029ab8e7
	  System UUID:                5e3b5c6c-5533-449e-b66c-c44897437511
	  Boot ID:                    966de046-a0b5-476b-8b8b-9607817e1121
	  Kernel Version:             5.15.0-1075-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-9vzwq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     hello-node-fcfd88b6f-hctqr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  default                     mysql-58ccfd96bb-t8j2q                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     3m9s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-668d6bf9bc-bqrbg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m33s
	  kube-system                 etcd-functional-150463                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m39s
	  kube-system                 kindnet-mts45                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m34s
	  kube-system                 kube-apiserver-functional-150463              250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 kube-controller-manager-functional-150463     200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-proxy-snh97                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-scheduler-functional-150463              100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-mq6wm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-dkbll         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m32s                  kube-proxy       
	  Normal   Starting                 3m32s                  kube-proxy       
	  Normal   Starting                 4m4s                   kube-proxy       
	  Warning  CgroupV1                 4m38s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m38s                  kubelet          Node functional-150463 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m38s                  kubelet          Node functional-150463 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m38s                  kubelet          Node functional-150463 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m38s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           4m34s                  node-controller  Node functional-150463 event: Registered Node functional-150463 in Controller
	  Normal   NodeReady                4m20s                  kubelet          Node functional-150463 status is now: NodeReady
	  Normal   RegisteredNode           4m2s                   node-controller  Node functional-150463 event: Registered Node functional-150463 in Controller
	  Normal   NodeHasSufficientMemory  3m38s (x8 over 3m38s)  kubelet          Node functional-150463 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 3m38s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 3m38s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    3m38s (x8 over 3m38s)  kubelet          Node functional-150463 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m38s (x8 over 3m38s)  kubelet          Node functional-150463 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m32s                  node-controller  Node functional-150463 event: Registered Node functional-150463 in Controller
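
	The node description above is the standard kubectl view and can be regenerated at any point while the profile is up:
	kubectl --context functional-150463 describe node functional-150463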
	
	
	==> dmesg <==
	[  +0.000740] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.623739] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023322] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.423487] kauditd_printk_skb: 46 callbacks suppressed
	[Feb 5 02:06] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[  +1.024110] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[  +2.015838] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[  +4.195567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[  +8.187239] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[ +16.126554] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[Feb 5 02:07] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[Feb 5 02:14] FS-Cache: Duplicate cookie detected
	[  +0.004758] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006757] FS-Cache: O-cookie d=0000000019113d10{9P.session} n=000000009f84c1f9
	[  +0.007541] FS-Cache: O-key=[10] '34323935373433323632'
	[  +0.005363] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006579] FS-Cache: N-cookie d=0000000019113d10{9P.session} n=0000000032d7f8bb
	[  +0.007540] FS-Cache: N-key=[10] '34323935373433323632'
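
	The FS-Cache "Duplicate cookie" messages coincide with the 9p mount tests at 02:14 and look benign. A hedged way to pull the same kernel ring buffer directly from the node is:
	out/minikube-linux-amd64 -p functional-150463 ssh "sudo dmesg | tail -n 30"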
	
	
	==> etcd [22c589e31dbb652d1d3b733f0d43c83c90b0bc7de9c859566aaca377f4c2d81c] <==
	{"level":"info","ts":"2025-02-05T02:10:53.427031Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2025-02-05T02:10:53.427057Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-05T02:10:53.427165Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-05T02:10:53.427202Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-05T02:10:53.429694Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-05T02:10:53.429766Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-05T02:10:53.429883Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-05T02:10:53.430078Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-05T02:10:53.430098Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-05T02:10:54.756008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-05T02:10:54.756057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-05T02:10:54.756082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-05T02:10:54.756094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2025-02-05T02:10:54.756102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-02-05T02:10:54.756111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2025-02-05T02:10:54.756128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-02-05T02:10:54.758466Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-150463 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-05T02:10:54.758488Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-05T02:10:54.758508Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-05T02:10:54.758744Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-05T02:10:54.758854Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-05T02:10:54.759488Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-05T02:10:54.759483Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-05T02:10:54.760111Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-02-05T02:10:54.760658Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [a379f4573d94bccfdfb3e007eb503ac4b00cf1a852ce829a13a69d0642fc1d55] <==
	{"level":"info","ts":"2025-02-05T02:10:24.537502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-05T02:10:24.537519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-02-05T02:10:24.537530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2025-02-05T02:10:24.537535Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-05T02:10:24.537575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2025-02-05T02:10:24.537587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-05T02:10:24.538602Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-150463 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-05T02:10:24.538608Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-05T02:10:24.538635Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-05T02:10:24.538858Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-05T02:10:24.538886Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-05T02:10:24.539325Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-05T02:10:24.539389Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-05T02:10:24.540161Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-02-05T02:10:24.540708Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-05T02:10:43.243055Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-02-05T02:10:43.243126Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-150463","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2025-02-05T02:10:43.243221Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-05T02:10:43.243322Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-05T02:10:43.261457Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-05T02:10:43.261519Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-05T02:10:43.261648Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-02-05T02:10:43.264186Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-05T02:10:43.264274Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-05T02:10:43.264284Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-150463","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 02:14:30 up 56 min,  0 users,  load average: 0.36, 0.58, 0.42
	Linux functional-150463 5.15.0-1075-gcp #84~20.04.1-Ubuntu SMP Thu Jan 16 20:44:47 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [87850cf31d3cd009084cd8319a827ea4beaa7ccdc066b26cd6320b7b960bce90] <==
	I0205 02:12:27.626502       1 main.go:301] handling current node
	I0205 02:12:37.627179       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:12:37.627221       1 main.go:301] handling current node
	I0205 02:12:47.627152       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:12:47.627183       1 main.go:301] handling current node
	I0205 02:12:57.626403       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:12:57.626451       1 main.go:301] handling current node
	I0205 02:13:07.627215       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:13:07.627258       1 main.go:301] handling current node
	I0205 02:13:17.626906       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:13:17.626979       1 main.go:301] handling current node
	I0205 02:13:27.626883       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:13:27.626920       1 main.go:301] handling current node
	I0205 02:13:37.626984       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:13:37.627021       1 main.go:301] handling current node
	I0205 02:13:47.626330       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:13:47.626365       1 main.go:301] handling current node
	I0205 02:13:57.626637       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:13:57.626703       1 main.go:301] handling current node
	I0205 02:14:07.626733       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:14:07.626793       1 main.go:301] handling current node
	I0205 02:14:17.627038       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:14:17.627074       1 main.go:301] handling current node
	I0205 02:14:27.627335       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:14:27.627367       1 main.go:301] handling current node
	
	
	==> kindnet [ffbcf8d9620143677f05408ccd300266cda4ab13a2e6d86ac97d5fc2d7f3a011] <==
	I0205 02:10:22.930423       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0205 02:10:22.930811       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0205 02:10:22.931010       1 main.go:148] setting mtu 1500 for CNI 
	I0205 02:10:22.931061       1 main.go:178] kindnetd IP family: "ipv4"
	I0205 02:10:22.931096       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0205 02:10:23.346661       1 controller.go:361] Starting controller kube-network-policies
	I0205 02:10:23.346686       1 controller.go:365] Waiting for informer caches to sync
	I0205 02:10:23.346694       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0205 02:10:25.547480       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0205 02:10:25.547579       1 metrics.go:61] Registering metrics
	I0205 02:10:25.547660       1 controller.go:401] Syncing nftables rules
	I0205 02:10:33.347038       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:10:33.347115       1 main.go:301] handling current node
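
	Both kindnet containers only ever handle the single node 192.168.49.2, which is expected for a one-node profile. Their output can also be fetched by pod name (name taken from the container status table):
	kubectl --context functional-150463 -n kube-system logs kindnet-mts45 --tail=20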
	
	
	==> kube-apiserver [f8ecd865b69e20cfc31151c3d422fda55078a8a7ef5c99bff25276e120fa52cf] <==
	I0205 02:10:55.825934       1 aggregator.go:171] initial CRD sync complete...
	I0205 02:10:55.826135       1 autoregister_controller.go:144] Starting autoregister controller
	I0205 02:10:55.826457       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0205 02:10:55.826472       1 cache.go:39] Caches are synced for autoregister controller
	I0205 02:10:55.826146       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0205 02:10:55.826194       1 shared_informer.go:320] Caches are synced for configmaps
	I0205 02:10:55.831134       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0205 02:10:55.833212       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0205 02:10:56.534041       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0205 02:10:56.656900       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0205 02:10:57.436952       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0205 02:10:57.553485       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0205 02:10:57.628723       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0205 02:10:57.635749       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0205 02:10:59.043730       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0205 02:10:59.294542       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0205 02:10:59.343377       1 controller.go:615] quota admission added evaluator for: endpoints
	I0205 02:11:16.363654       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.240.33"}
	I0205 02:11:21.827754       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.84.19"}
	I0205 02:11:23.568223       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.201.232"}
	I0205 02:11:28.032378       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.5.178"}
	I0205 02:12:33.518106       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.28.220"}
	I0205 02:14:21.209882       1 controller.go:615] quota admission added evaluator for: namespaces
	I0205 02:14:21.358759       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.122.212"}
	I0205 02:14:21.372564       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.153.68"}
	
	
	==> kube-controller-manager [84dda87c92b8db5a5a9e0edf8ab3f1db9587a639c54a95ef1fe3befd8c43c2da] <==
	I0205 02:12:33.467483       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-fcfd88b6f" duration="47.746µs"
	I0205 02:12:34.771908       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-fcfd88b6f" duration="5.739505ms"
	I0205 02:12:34.771996       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-fcfd88b6f" duration="51.063µs"
	I0205 02:12:57.689382       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-150463"
	I0205 02:13:54.464184       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="83.143µs"
	I0205 02:14:07.462003       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="58.429µs"
	I0205 02:14:21.256838       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="13.30957ms"
	E0205 02:14:21.256888       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0205 02:14:21.261951       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="3.944848ms"
	E0205 02:14:21.261986       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0205 02:14:21.263609       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="9.547931ms"
	E0205 02:14:21.263658       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0205 02:14:21.268734       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="5.668938ms"
	E0205 02:14:21.268789       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0205 02:14:21.270394       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="5.563099ms"
	E0205 02:14:21.270425       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0205 02:14:21.284431       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="12.385474ms"
	I0205 02:14:21.330965       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="46.484778ms"
	I0205 02:14:21.331087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="68.602µs"
	I0205 02:14:21.339338       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="13.553687ms"
	I0205 02:14:21.340598       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="43.094µs"
	I0205 02:14:21.345906       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="6.434598ms"
	I0205 02:14:21.345985       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="43.014µs"
	I0205 02:14:21.349228       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="47.045µs"
	I0205 02:14:29.370249       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-150463"
	
	
	==> kube-controller-manager [feaa0863ed3f4168288a693fef261fd0ecf288fd7172be51d8f2c3ad6e332d08] <==
	I0205 02:10:28.592572       1 shared_informer.go:320] Caches are synced for TTL
	I0205 02:10:28.592606       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0205 02:10:28.592619       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0205 02:10:28.592634       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0205 02:10:28.592666       1 shared_informer.go:320] Caches are synced for crt configmap
	I0205 02:10:28.592749       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0205 02:10:28.593690       1 shared_informer.go:320] Caches are synced for cronjob
	I0205 02:10:28.594932       1 shared_informer.go:320] Caches are synced for taint
	I0205 02:10:28.595065       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0205 02:10:28.595175       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-150463"
	I0205 02:10:28.595216       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0205 02:10:28.596278       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0205 02:10:28.596367       1 shared_informer.go:320] Caches are synced for resource quota
	I0205 02:10:28.597473       1 shared_informer.go:320] Caches are synced for resource quota
	I0205 02:10:28.598190       1 shared_informer.go:320] Caches are synced for job
	I0205 02:10:28.602277       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0205 02:10:28.693051       1 shared_informer.go:320] Caches are synced for garbage collector
	I0205 02:10:28.693081       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0205 02:10:28.693092       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0205 02:10:28.702680       1 shared_informer.go:320] Caches are synced for garbage collector
	I0205 02:10:28.900801       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="357.892708ms"
	I0205 02:10:28.900922       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="81.331µs"
	I0205 02:10:32.586088       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-150463"
	I0205 02:10:34.265238       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="6.428661ms"
	I0205 02:10:34.265365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="84.796µs"
	
	
	==> kube-proxy [be2a92594c2a8a993ad881f90f94860172be7c624bbec024891da0fc1219537d] <==
	I0205 02:10:57.062476       1 server_linux.go:66] "Using iptables proxy"
	I0205 02:10:57.191106       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0205 02:10:57.191171       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0205 02:10:57.231342       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0205 02:10:57.231409       1 server_linux.go:170] "Using iptables Proxier"
	I0205 02:10:57.233279       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0205 02:10:57.233851       1 server.go:497] "Version info" version="v1.32.1"
	I0205 02:10:57.234072       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 02:10:57.235491       1 config.go:105] "Starting endpoint slice config controller"
	I0205 02:10:57.235541       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0205 02:10:57.235550       1 config.go:329] "Starting node config controller"
	I0205 02:10:57.235561       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0205 02:10:57.235592       1 config.go:199] "Starting service config controller"
	I0205 02:10:57.235603       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0205 02:10:57.335889       1 shared_informer.go:320] Caches are synced for node config
	I0205 02:10:57.335905       1 shared_informer.go:320] Caches are synced for service config
	I0205 02:10:57.335917       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d193a52476c7a254dbe7985bdf70ca3543cda9e32b4975f68b559b2a8d8ff4e7] <==
	I0205 02:10:22.740789       1 server_linux.go:66] "Using iptables proxy"
	E0205 02:10:22.954424       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-150463\": dial tcp 192.168.49.2:8441: connect: connection refused"
	I0205 02:10:25.547762       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0205 02:10:25.547842       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0205 02:10:25.836453       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0205 02:10:25.836517       1 server_linux.go:170] "Using iptables Proxier"
	I0205 02:10:25.839083       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0205 02:10:25.839513       1 server.go:497] "Version info" version="v1.32.1"
	I0205 02:10:25.839553       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 02:10:25.841722       1 config.go:105] "Starting endpoint slice config controller"
	I0205 02:10:25.841810       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0205 02:10:25.841748       1 config.go:199] "Starting service config controller"
	I0205 02:10:25.841882       1 config.go:329] "Starting node config controller"
	I0205 02:10:25.841908       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0205 02:10:25.841976       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0205 02:10:25.942295       1 shared_informer.go:320] Caches are synced for service config
	I0205 02:10:25.942378       1 shared_informer.go:320] Caches are synced for node config
	I0205 02:10:25.942490       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [26456add96d27fe5e7ad5798e8f6f9829760d41136771bc410e4aff992926674] <==
	I0205 02:10:23.802961       1 serving.go:386] Generated self-signed cert in-memory
	I0205 02:10:25.651919       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0205 02:10:25.651958       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 02:10:25.735531       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0205 02:10:25.735663       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0205 02:10:25.735722       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0205 02:10:25.735811       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0205 02:10:25.735682       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0205 02:10:25.736014       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0205 02:10:25.735869       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0205 02:10:25.743075       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0205 02:10:25.736858       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0205 02:10:25.836040       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0205 02:10:25.843281       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0205 02:10:43.243242       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0205 02:10:43.243412       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0205 02:10:43.243549       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0205 02:10:43.243581       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I0205 02:10:43.243755       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	E0205 02:10:43.244649       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c3040259cf4773d34a84e579849b3c463837da1db975da812808f02d65cba28e] <==
	I0205 02:10:53.847027       1 serving.go:386] Generated self-signed cert in-memory
	W0205 02:10:55.682316       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0205 02:10:55.682366       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0205 02:10:55.682379       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0205 02:10:55.682391       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0205 02:10:55.743517       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0205 02:10:55.743544       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 02:10:55.745501       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0205 02:10:55.745566       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0205 02:10:55.745680       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0205 02:10:55.745723       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0205 02:10:55.845940       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 05 02:14:02 functional-150463 kubelet[5325]: E0205 02:14:02.678289    5325 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721642678111811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:188261,},InodesUsed:&UInt64Value{Value:93,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:14:02 functional-150463 kubelet[5325]: E0205 02:14:02.678330    5325 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721642678111811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:188261,},InodesUsed:&UInt64Value{Value:93,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:14:12 functional-150463 kubelet[5325]: E0205 02:14:12.410708    5325 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Feb 05 02:14:12 functional-150463 kubelet[5325]: E0205 02:14:12.410789    5325 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Feb 05 02:14:12 functional-150463 kubelet[5325]: E0205 02:14:12.411101    5325 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ht4dn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-sv
c_default(e68f4c0c-eda3-4985-8b40-b36779e5155e): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Feb 05 02:14:12 functional-150463 kubelet[5325]: E0205 02:14:12.412366    5325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e68f4c0c-eda3-4985-8b40-b36779e5155e"
	Feb 05 02:14:12 functional-150463 kubelet[5325]: E0205 02:14:12.679489    5325 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721652679308407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:188261,},InodesUsed:&UInt64Value{Value:93,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:14:12 functional-150463 kubelet[5325]: E0205 02:14:12.679532    5325 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721652679308407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:188261,},InodesUsed:&UInt64Value{Value:93,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:14:15 functional-150463 kubelet[5325]: I0205 02:14:15.101429    5325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/deabf349-132c-4d15-91b9-39655fc5a5bc-test-volume\") pod \"deabf349-132c-4d15-91b9-39655fc5a5bc\" (UID: \"deabf349-132c-4d15-91b9-39655fc5a5bc\") "
	Feb 05 02:14:15 functional-150463 kubelet[5325]: I0205 02:14:15.101499    5325 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qphfx\" (UniqueName: \"kubernetes.io/projected/deabf349-132c-4d15-91b9-39655fc5a5bc-kube-api-access-qphfx\") pod \"deabf349-132c-4d15-91b9-39655fc5a5bc\" (UID: \"deabf349-132c-4d15-91b9-39655fc5a5bc\") "
	Feb 05 02:14:15 functional-150463 kubelet[5325]: I0205 02:14:15.101595    5325 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/deabf349-132c-4d15-91b9-39655fc5a5bc-test-volume" (OuterVolumeSpecName: "test-volume") pod "deabf349-132c-4d15-91b9-39655fc5a5bc" (UID: "deabf349-132c-4d15-91b9-39655fc5a5bc"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Feb 05 02:14:15 functional-150463 kubelet[5325]: I0205 02:14:15.103390    5325 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deabf349-132c-4d15-91b9-39655fc5a5bc-kube-api-access-qphfx" (OuterVolumeSpecName: "kube-api-access-qphfx") pod "deabf349-132c-4d15-91b9-39655fc5a5bc" (UID: "deabf349-132c-4d15-91b9-39655fc5a5bc"). InnerVolumeSpecName "kube-api-access-qphfx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Feb 05 02:14:15 functional-150463 kubelet[5325]: I0205 02:14:15.201998    5325 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qphfx\" (UniqueName: \"kubernetes.io/projected/deabf349-132c-4d15-91b9-39655fc5a5bc-kube-api-access-qphfx\") on node \"functional-150463\" DevicePath \"\""
	Feb 05 02:14:15 functional-150463 kubelet[5325]: I0205 02:14:15.202041    5325 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/deabf349-132c-4d15-91b9-39655fc5a5bc-test-volume\") on node \"functional-150463\" DevicePath \"\""
	Feb 05 02:14:15 functional-150463 kubelet[5325]: I0205 02:14:15.967738    5325 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c98cbca67b5bb5ad922ebdc7da2da66d9de8fc740601f99d4b7a131069df21e5"
	Feb 05 02:14:21 functional-150463 kubelet[5325]: I0205 02:14:21.284094    5325 memory_manager.go:355] "RemoveStaleState removing state" podUID="deabf349-132c-4d15-91b9-39655fc5a5bc" containerName="mount-munger"
	Feb 05 02:14:21 functional-150463 kubelet[5325]: I0205 02:14:21.345511    5325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc662\" (UniqueName: \"kubernetes.io/projected/5ad938cf-5295-46e5-a62f-69b747c68755-kube-api-access-fc662\") pod \"kubernetes-dashboard-7779f9b69b-dkbll\" (UID: \"5ad938cf-5295-46e5-a62f-69b747c68755\") " pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-dkbll"
	Feb 05 02:14:21 functional-150463 kubelet[5325]: I0205 02:14:21.345589    5325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5ad938cf-5295-46e5-a62f-69b747c68755-tmp-volume\") pod \"kubernetes-dashboard-7779f9b69b-dkbll\" (UID: \"5ad938cf-5295-46e5-a62f-69b747c68755\") " pod="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b-dkbll"
	Feb 05 02:14:21 functional-150463 kubelet[5325]: I0205 02:14:21.446381    5325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/29a0122f-da9c-408d-940c-0f3444201a22-tmp-volume\") pod \"dashboard-metrics-scraper-5d59dccf9b-mq6wm\" (UID: \"29a0122f-da9c-408d-940c-0f3444201a22\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-mq6wm"
	Feb 05 02:14:21 functional-150463 kubelet[5325]: I0205 02:14:21.446441    5325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9p7l\" (UniqueName: \"kubernetes.io/projected/29a0122f-da9c-408d-940c-0f3444201a22-kube-api-access-h9p7l\") pod \"dashboard-metrics-scraper-5d59dccf9b-mq6wm\" (UID: \"29a0122f-da9c-408d-940c-0f3444201a22\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-mq6wm"
	Feb 05 02:14:21 functional-150463 kubelet[5325]: W0205 02:14:21.610655    5325 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3/crio-48ec35f36e81469698e58a179620e916857b59dd44ab5c6fbf32db5c7853782d WatchSource:0}: Error finding container 48ec35f36e81469698e58a179620e916857b59dd44ab5c6fbf32db5c7853782d: Status 404 returned error can't find the container with id 48ec35f36e81469698e58a179620e916857b59dd44ab5c6fbf32db5c7853782d
	Feb 05 02:14:21 functional-150463 kubelet[5325]: W0205 02:14:21.664311    5325 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3/crio-33454ed6ccab965be768da552c9969d4b95fb5197a49c80b90ba227d2e90398c WatchSource:0}: Error finding container 33454ed6ccab965be768da552c9969d4b95fb5197a49c80b90ba227d2e90398c: Status 404 returned error can't find the container with id 33454ed6ccab965be768da552c9969d4b95fb5197a49c80b90ba227d2e90398c
	Feb 05 02:14:22 functional-150463 kubelet[5325]: E0205 02:14:22.680877    5325 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721662680676604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:197589,},InodesUsed:&UInt64Value{Value:99,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:14:22 functional-150463 kubelet[5325]: E0205 02:14:22.680914    5325 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738721662680676604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:197589,},InodesUsed:&UInt64Value{Value:99,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:14:27 functional-150463 kubelet[5325]: E0205 02:14:27.454649    5325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e68f4c0c-eda3-4985-8b40-b36779e5155e"
	
	
	==> storage-provisioner [6e971732adbb9ce3acc29d5afba546c9191a901c6ddbe2a8bb8b092f3fda5789] <==
	I0205 02:10:38.147479       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0205 02:10:38.155446       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0205 02:10:38.155493       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [c1cff4b786f8c376f923e458bf54e664b37267422ffc053e6c5c77e05adbde2c] <==
	I0205 02:10:56.946183       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0205 02:10:57.031482       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0205 02:10:57.031547       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0205 02:11:14.428114       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0205 02:11:14.428182       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"07992596-d1ff-442c-8e4e-a6d8cbbc4a4c", APIVersion:"v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-150463_1fa065fd-ae27-43ef-8859-a18448424289 became leader
	I0205 02:11:14.428269       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-150463_1fa065fd-ae27-43ef-8859-a18448424289!
	I0205 02:11:14.528713       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-150463_1fa065fd-ae27-43ef-8859-a18448424289!
	I0205 02:11:28.534186       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0205 02:11:28.534319       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"47580526-76b9-4ded-a4fc-9a25d88c05c6", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0205 02:11:28.534253       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    f6d7fb24-d290-48fa-9e55-9f5b97fc17f4 346 0 2025-02-05 02:09:57 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-02-05 02:09:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-47580526-76b9-4ded-a4fc-9a25d88c05c6 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  47580526-76b9-4ded-a4fc-9a25d88c05c6 704 0 2025-02-05 02:11:28 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-02-05 02:11:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-02-05 02:11:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0205 02:11:28.534740       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-47580526-76b9-4ded-a4fc-9a25d88c05c6" provisioned
	I0205 02:11:28.534763       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0205 02:11:28.534769       1 volume_store.go:212] Trying to save persistentvolume "pvc-47580526-76b9-4ded-a4fc-9a25d88c05c6"
	I0205 02:11:28.544660       1 volume_store.go:219] persistentvolume "pvc-47580526-76b9-4ded-a4fc-9a25d88c05c6" saved
	I0205 02:11:28.545809       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"47580526-76b9-4ded-a4fc-9a25d88c05c6", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-47580526-76b9-4ded-a4fc-9a25d88c05c6
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-150463 -n functional-150463
helpers_test.go:261: (dbg) Run:  kubectl --context functional-150463 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-t8j2q nginx-svc sp-pod dashboard-metrics-scraper-5d59dccf9b-mq6wm kubernetes-dashboard-7779f9b69b-dkbll
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-150463 describe pod busybox-mount mysql-58ccfd96bb-t8j2q nginx-svc sp-pod dashboard-metrics-scraper-5d59dccf9b-mq6wm kubernetes-dashboard-7779f9b69b-dkbll
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-150463 describe pod busybox-mount mysql-58ccfd96bb-t8j2q nginx-svc sp-pod dashboard-metrics-scraper-5d59dccf9b-mq6wm kubernetes-dashboard-7779f9b69b-dkbll: exit status 1 (88.614593ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-150463/192.168.49.2
	Start Time:       Wed, 05 Feb 2025 02:12:47 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://cd9697378be8d05b510ab1fd69a0011bdf37016a50df76c381002bd0eb384467
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 05 Feb 2025 02:14:13 +0000
	      Finished:     Wed, 05 Feb 2025 02:14:13 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qphfx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-qphfx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  103s  default-scheduler  Successfully assigned default/busybox-mount to functional-150463
	  Normal  Pulling    103s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     18s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.054s (1m25.446s including waiting). Image size: 4631262 bytes.
	  Normal  Created    18s   kubelet            Created container: mount-munger
	  Normal  Started    18s   kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-t8j2q
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-150463/192.168.49.2
	Start Time:       Wed, 05 Feb 2025 02:11:21 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g6p8t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-g6p8t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m9s                 default-scheduler  Successfully assigned default/mysql-58ccfd96bb-t8j2q to functional-150463
	  Warning  Failed     2m39s                kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     50s (x2 over 2m39s)  kubelet            Error: ErrImagePull
	  Warning  Failed     50s                  kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    37s (x2 over 2m38s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     37s (x2 over 2m38s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    24s (x3 over 3m9s)   kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-150463/192.168.49.2
	Start Time:       Wed, 05 Feb 2025 02:11:23 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ht4dn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ht4dn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m7s                 default-scheduler  Successfully assigned default/nginx-svc to functional-150463
	  Normal   Pulling    115s (x2 over 3m8s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     19s (x2 over 2m8s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     19s (x2 over 2m8s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4s (x2 over 2m8s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4s (x2 over 2m8s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-150463/192.168.49.2
	Start Time:       Wed, 05 Feb 2025 02:11:28 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7fm5r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-7fm5r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m2s                default-scheduler  Successfully assigned default/sp-pod to functional-150463
	  Warning  Failed     95s                 kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     95s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    95s                 kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     95s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    81s (x2 over 3m2s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5d59dccf9b-mq6wm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-dkbll" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-150463 describe pod busybox-mount mysql-58ccfd96bb-t8j2q nginx-svc sp-pod dashboard-metrics-scraper-5d59dccf9b-mq6wm kubernetes-dashboard-7779f9b69b-dkbll: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (187.87s)
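Note: every non-running pod listed above is stuck in ErrImagePull/ImagePullBackOff against docker.io, and the kubelet events attribute this to Docker Hub's anonymous pull rate limit rather than to the storage provisioner under test. A minimal mitigation sketch for a rerun (assumptions: the host itself can still pull the images once, and the profile name functional-150463 from this run is reused) is to side-load the images into the node so the kubelet never has to contact Docker Hub:

	# pull once on the host, then copy the images into the minikube node's image store
	docker pull docker.io/nginx:alpine
	docker pull docker.io/nginx
	docker pull docker.io/mysql:5.7
	minikube -p functional-150463 image load docker.io/nginx:alpine
	minikube -p functional-150463 image load docker.io/nginx
	minikube -p functional-150463 image load docker.io/mysql:5.7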

                                                
                                    
TestFunctional/parallel/MySQL (602.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-150463 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-t8j2q" [575799c9-d22c-4937-9da4-e3ac6f5deea5] Pending
helpers_test.go:344: "mysql-58ccfd96bb-t8j2q" [575799c9-d22c-4937-9da4-e3ac6f5deea5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
functional_test.go:1816: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1816: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-150463 -n functional-150463
functional_test.go:1816: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-02-05 02:21:22.175794554 +0000 UTC m=+1079.073087817
functional_test.go:1816: (dbg) Run:  kubectl --context functional-150463 describe po mysql-58ccfd96bb-t8j2q -n default
functional_test.go:1816: (dbg) kubectl --context functional-150463 describe po mysql-58ccfd96bb-t8j2q -n default:
Name:             mysql-58ccfd96bb-t8j2q
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-150463/192.168.49.2
Start Time:       Wed, 05 Feb 2025 02:11:21 +0000
Labels:           app=mysql
pod-template-hash=58ccfd96bb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/mysql-58ccfd96bb
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g6p8t (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-g6p8t:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-58ccfd96bb-t8j2q to functional-150463
Warning  Failed     9m30s                 kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    118s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     87s (x5 over 9m30s)   kubelet            Error: ErrImagePull
Warning  Failed     87s (x4 over 7m41s)   kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   BackOff    13s (x16 over 9m29s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
Warning  Failed     13s (x16 over 9m29s)  kubelet            Error: ImagePullBackOff
functional_test.go:1816: (dbg) Run:  kubectl --context functional-150463 logs mysql-58ccfd96bb-t8j2q -n default
functional_test.go:1816: (dbg) Non-zero exit: kubectl --context functional-150463 logs mysql-58ccfd96bb-t8j2q -n default: exit status 1 (69.054146ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-58ccfd96bb-t8j2q" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1816: kubectl --context functional-150463 logs mysql-58ccfd96bb-t8j2q -n default: exit status 1
functional_test.go:1818: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
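The MySQL failure shares the root cause shown in the events above: every pull of docker.io/mysql:5.7 is rejected with toomanyrequests before the 10m0s deadline expires. As the error text itself suggests, authenticated pulls get a higher limit; a hedged sketch (the registry credentials below are placeholders, not part of this run) is to create a pull secret and attach it to the default service account, so pods created afterwards in the default namespace pull with credentials:

	kubectl --context functional-150463 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<dockerhub-user> --docker-password=<dockerhub-token>
	kubectl --context functional-150463 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'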
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-150463
helpers_test.go:235: (dbg) docker inspect functional-150463:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3",
	        "Created": "2025-02-05T02:09:38.244942193Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43246,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-05T02:09:38.357852235Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e72c4cbe9b296d8a58fbcae1a7b969fa1cee662cd7b86f2d4efc5e146519cf0a",
	        "ResolvConfPath": "/var/lib/docker/containers/1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3/hostname",
	        "HostsPath": "/var/lib/docker/containers/1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3/hosts",
	        "LogPath": "/var/lib/docker/containers/1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3/1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3-json.log",
	        "Name": "/functional-150463",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-150463:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-150463",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fee2a687b704529f4df42910f79cb67e46a4feca21ce975089681c8faf6fcdf6-init/diff:/var/lib/docker/overlay2/f186c7f5b5e3359a3aedb1825f83d9f64c1bd7ca8cd203398cd99d9b6a74d20a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fee2a687b704529f4df42910f79cb67e46a4feca21ce975089681c8faf6fcdf6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fee2a687b704529f4df42910f79cb67e46a4feca21ce975089681c8faf6fcdf6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fee2a687b704529f4df42910f79cb67e46a4feca21ce975089681c8faf6fcdf6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-150463",
	                "Source": "/var/lib/docker/volumes/functional-150463/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-150463",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-150463",
	                "name.minikube.sigs.k8s.io": "functional-150463",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7d4d442a2b9547651fc8072acaa710780833e64d025702f78e300fc01fa417a2",
	            "SandboxKey": "/var/run/docker/netns/7d4d442a2b95",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-150463": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "fa6ceaab36f5a0e09832bac108f0625bd802864af516fc695b9c90ae04d836cf",
	                    "EndpointID": "d4f45c2bfc32b7a873b206653e19d0d3f5fcc672456687e8fa0e91c3a56ee2f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-150463",
	                        "1b91ea1b28c8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
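The inspect output above mainly confirms that the node container is running and shows which host ports (32778-32782) are published for the container's SSH, Docker, registry, and apiserver ports. As a convenience sketch, not part of the test run, the same mapping can be extracted directly with a Go template:

    docker inspect -f '{{json .NetworkSettings.Ports}}' functional-150463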
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-150463 -n functional-150463
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-150463 logs -n 25: (1.370000564s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-150463 ssh findmnt                                        | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | -T /mount-9p | grep 9p                                               |                   |         |         |                     |                     |
	| ssh            | functional-150463 ssh -- ls                                          | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | -la /mount-9p                                                        |                   |         |         |                     |                     |
	| ssh            | functional-150463 ssh sudo                                           | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|                | umount -f /mount-9p                                                  |                   |         |         |                     |                     |
	| ssh            | functional-150463 ssh findmnt                                        | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|                | -T /mount1                                                           |                   |         |         |                     |                     |
	| mount          | -p functional-150463                                                 | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup61532747/001:/mount2 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                   |         |         |                     |                     |
	| mount          | -p functional-150463                                                 | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup61532747/001:/mount1 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                   |         |         |                     |                     |
	| mount          | -p functional-150463                                                 | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup61532747/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                   |         |         |                     |                     |
	| ssh            | functional-150463 ssh findmnt                                        | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | -T /mount1                                                           |                   |         |         |                     |                     |
	| ssh            | functional-150463 ssh findmnt                                        | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | -T /mount2                                                           |                   |         |         |                     |                     |
	| ssh            | functional-150463 ssh findmnt                                        | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | -T /mount3                                                           |                   |         |         |                     |                     |
	| mount          | -p functional-150463                                                 | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|                | --kill=true                                                          |                   |         |         |                     |                     |
	| start          | -p functional-150463                                                 | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|                | --dry-run --memory                                                   |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                              |                   |         |         |                     |                     |
	|                | --driver=docker                                                      |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                             |                   |         |         |                     |                     |
	| start          | -p functional-150463                                                 | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|                | --dry-run --memory                                                   |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                              |                   |         |         |                     |                     |
	|                | --driver=docker                                                      |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                             |                   |         |         |                     |                     |
	| start          | -p functional-150463                                                 | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|                | --dry-run --alsologtostderr                                          |                   |         |         |                     |                     |
	|                | -v=1 --driver=docker                                                 |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                             |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                   | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:16 UTC |
	|                | -p functional-150463                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                               |                   |         |         |                     |                     |
	| update-context | functional-150463                                                    | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| update-context | functional-150463                                                    | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| update-context | functional-150463                                                    | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| image          | functional-150463                                                    | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | image ls --format short                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-150463                                                    | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | image ls --format yaml                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| ssh            | functional-150463 ssh pgrep                                          | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC |                     |
	|                | buildkitd                                                            |                   |         |         |                     |                     |
	| image          | functional-150463 image build -t                                     | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | localhost/my-image:functional-150463                                 |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                     |                   |         |         |                     |                     |
	| image          | functional-150463 image ls                                           | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	| image          | functional-150463                                                    | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | image ls --format json                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-150463                                                    | functional-150463 | jenkins | v1.35.0 | 05 Feb 25 02:14 UTC | 05 Feb 25 02:14 UTC |
	|                | image ls --format table                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/05 02:14:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0205 02:14:20.118773   60365 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:14:20.118863   60365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:14:20.118871   60365 out.go:358] Setting ErrFile to fd 2...
	I0205 02:14:20.118875   60365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:14:20.119034   60365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
	I0205 02:14:20.119524   60365 out.go:352] Setting JSON to false
	I0205 02:14:20.120407   60365 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3406,"bootTime":1738718254,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 02:14:20.120505   60365 start.go:139] virtualization: kvm guest
	I0205 02:14:20.122588   60365 out.go:177] * [functional-150463] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 02:14:20.124199   60365 notify.go:220] Checking for updates...
	I0205 02:14:20.124230   60365 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 02:14:20.125783   60365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 02:14:20.127349   60365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig
	I0205 02:14:20.128928   60365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube
	I0205 02:14:20.130250   60365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 02:14:20.131560   60365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 02:14:20.133131   60365 config.go:182] Loaded profile config "functional-150463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:14:20.133629   60365 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 02:14:20.155575   60365 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0205 02:14:20.155661   60365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0205 02:14:20.202080   60365 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-05 02:14:20.193450632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0205 02:14:20.202186   60365 docker.go:318] overlay module found
	I0205 02:14:20.203946   60365 out.go:177] * Using the docker driver based on existing profile
	I0205 02:14:20.205218   60365 start.go:297] selected driver: docker
	I0205 02:14:20.205232   60365 start.go:901] validating driver "docker" against &{Name:functional-150463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-150463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:14:20.205323   60365 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 02:14:20.205405   60365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0205 02:14:20.250857   60365 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-05 02:14:20.242372606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0205 02:14:20.251685   60365 cni.go:84] Creating CNI manager for ""
	I0205 02:14:20.251758   60365 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0205 02:14:20.251816   60365 start.go:340] cluster config:
	{Name:functional-150463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-150463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:14:20.253860   60365 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Feb 05 02:19:55 functional-150463 crio[4958]: time="2025-02-05 02:19:55.454207235Z" level=info msg="Image docker.io/nginx:alpine not found" id=213d8ea1-56d7-44e4-9426-ce1380c74192 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:19:55 functional-150463 crio[4958]: time="2025-02-05 02:19:55.454687275Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=064e2cfb-6697-4d81-9544-3e5af4b36215 name=/runtime.v1.ImageService/PullImage
	Feb 05 02:19:55 functional-150463 crio[4958]: time="2025-02-05 02:19:55.455877581Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Feb 05 02:20:10 functional-150463 crio[4958]: time="2025-02-05 02:20:10.453458055Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=aa5ec09f-470a-4e15-8458-44728b612d63 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:20:10 functional-150463 crio[4958]: time="2025-02-05 02:20:10.453838657Z" level=info msg="Image docker.io/mysql:5.7 not found" id=aa5ec09f-470a-4e15-8458-44728b612d63 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:20:24 functional-150463 crio[4958]: time="2025-02-05 02:20:24.454079852Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=059c65a3-b723-4fde-ba82-f961130f8556 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:20:24 functional-150463 crio[4958]: time="2025-02-05 02:20:24.454338651Z" level=info msg="Image docker.io/mysql:5.7 not found" id=059c65a3-b723-4fde-ba82-f961130f8556 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:20:26 functional-150463 crio[4958]: time="2025-02-05 02:20:26.454191805Z" level=info msg="Pulling image: docker.io/nginx:latest" id=5b0a42d4-0d42-4b16-bf80-3682e4421eed name=/runtime.v1.ImageService/PullImage
	Feb 05 02:20:26 functional-150463 crio[4958]: time="2025-02-05 02:20:26.475010934Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Feb 05 02:20:38 functional-150463 crio[4958]: time="2025-02-05 02:20:38.453236995Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=a5190e5b-21f1-47a6-9262-3131d9567c43 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:20:38 functional-150463 crio[4958]: time="2025-02-05 02:20:38.453523147Z" level=info msg="Image docker.io/nginx:alpine not found" id=a5190e5b-21f1-47a6-9262-3131d9567c43 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:20:39 functional-150463 crio[4958]: time="2025-02-05 02:20:39.454047050Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=a9efef92-8e81-41a6-bcdf-0f6ed431e573 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:20:39 functional-150463 crio[4958]: time="2025-02-05 02:20:39.454345555Z" level=info msg="Image docker.io/mysql:5.7 not found" id=a9efef92-8e81-41a6-bcdf-0f6ed431e573 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:20:51 functional-150463 crio[4958]: time="2025-02-05 02:20:51.453887950Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7278bbe0-2370-43b1-9c24-dcc391ff7dc6 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:20:51 functional-150463 crio[4958]: time="2025-02-05 02:20:51.454166455Z" level=info msg="Image docker.io/nginx:alpine not found" id=7278bbe0-2370-43b1-9c24-dcc391ff7dc6 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:20:54 functional-150463 crio[4958]: time="2025-02-05 02:20:54.453664609Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=f2be809d-2647-417c-b258-464a37468202 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:20:54 functional-150463 crio[4958]: time="2025-02-05 02:20:54.453958385Z" level=info msg="Image docker.io/mysql:5.7 not found" id=f2be809d-2647-417c-b258-464a37468202 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:21:03 functional-150463 crio[4958]: time="2025-02-05 02:21:03.453909176Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=ad33558a-1095-4aa6-97cc-e667dc250b82 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:21:03 functional-150463 crio[4958]: time="2025-02-05 02:21:03.454135067Z" level=info msg="Image docker.io/nginx:alpine not found" id=ad33558a-1095-4aa6-97cc-e667dc250b82 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:21:09 functional-150463 crio[4958]: time="2025-02-05 02:21:09.453925517Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=4bad6eea-fe06-4313-b4b7-6258c4a032d0 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:21:09 functional-150463 crio[4958]: time="2025-02-05 02:21:09.454147188Z" level=info msg="Image docker.io/mysql:5.7 not found" id=4bad6eea-fe06-4313-b4b7-6258c4a032d0 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:21:16 functional-150463 crio[4958]: time="2025-02-05 02:21:16.453485216Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=63b4ddef-8a9d-41aa-a1e6-ef8225e697d4 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:21:16 functional-150463 crio[4958]: time="2025-02-05 02:21:16.453800658Z" level=info msg="Image docker.io/nginx:alpine not found" id=63b4ddef-8a9d-41aa-a1e6-ef8225e697d4 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:21:22 functional-150463 crio[4958]: time="2025-02-05 02:21:22.453412812Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=12309d9a-0c49-421f-930d-499529c3acc1 name=/runtime.v1.ImageService/ImageStatus
	Feb 05 02:21:22 functional-150463 crio[4958]: time="2025-02-05 02:21:22.453821252Z" level=info msg="Image docker.io/mysql:5.7 not found" id=12309d9a-0c49-421f-930d-499529c3acc1 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	c91a07145b35e       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   5 minutes ago       Running             dashboard-metrics-scraper   0                   33454ed6ccab9       dashboard-metrics-scraper-5d59dccf9b-mq6wm
	dc07e53bb83c7       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         5 minutes ago       Running             kubernetes-dashboard        0                   48ec35f36e814       kubernetes-dashboard-7779f9b69b-dkbll
	cd9697378be8d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              7 minutes ago       Exited              mount-munger                0                   c98cbca67b5bb       busybox-mount
	64b2d0fdcd77b       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 8 minutes ago       Running             echoserver                  0                   14078b01214e1       hello-node-fcfd88b6f-hctqr
	5133253b72618       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               8 minutes ago       Running             echoserver                  0                   209b08b607a1d       hello-node-connect-58f9cf68d8-9vzwq
	ec06c329b648e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 10 minutes ago      Running             coredns                     2                   dcccc61ea8905       coredns-668d6bf9bc-bqrbg
	87850cf31d3cd       d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56                                                 10 minutes ago      Running             kindnet-cni                 2                   08770c641feee       kindnet-mts45
	be2a92594c2a8       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                 10 minutes ago      Running             kube-proxy                  2                   c75032f167cc3       kube-proxy-snh97
	c1cff4b786f8c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         3                   ceb7bd0829f76       storage-provisioner
	f8ecd865b69e2       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                 10 minutes ago      Running             kube-apiserver              0                   bab50022e3c1e       kube-apiserver-functional-150463
	22c589e31dbb6       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 10 minutes ago      Running             etcd                        2                   3781e3fad15e8       etcd-functional-150463
	c3040259cf477       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                 10 minutes ago      Running             kube-scheduler              2                   d7b1f48a9c976       kube-scheduler-functional-150463
	84dda87c92b8d       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                 10 minutes ago      Running             kube-controller-manager     2                   ac70688bb0946       kube-controller-manager-functional-150463
	6e971732adbb9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Exited              storage-provisioner         2                   ceb7bd0829f76       storage-provisioner
	a379f4573d94b       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 11 minutes ago      Exited              etcd                        1                   3781e3fad15e8       etcd-functional-150463
	26456add96d27       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                 11 minutes ago      Exited              kube-scheduler              1                   d7b1f48a9c976       kube-scheduler-functional-150463
	ffbcf8d962014       d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56                                                 11 minutes ago      Exited              kindnet-cni                 1                   08770c641feee       kindnet-mts45
	d193a52476c7a       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                 11 minutes ago      Exited              kube-proxy                  1                   c75032f167cc3       kube-proxy-snh97
	feaa0863ed3f4       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                 11 minutes ago      Exited              kube-controller-manager     1                   ac70688bb0946       kube-controller-manager-functional-150463
	c31b8dfd67ffd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 11 minutes ago      Exited              coredns                     1                   dcccc61ea8905       coredns-668d6bf9bc-bqrbg
	
	
	==> coredns [c31b8dfd67ffdb1d7910aa5c697998e5b5000f47ee51bfd7c5e6423cff94177a] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37674 - 62233 "HINFO IN 4582310575312474049.8829241919095050275. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.102100674s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ec06c329b648e613776528bf7cc8f7b2e9121ce33bf0d096da43b714b4ee2bc6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58767 - 45933 "HINFO IN 3679247596174586508.4741551725947178915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041586583s
	
	
	==> describe nodes <==
	Name:               functional-150463
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-150463
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e14c106b6cd223b80b8f7425af26284a33f75c3d
	                    minikube.k8s.io/name=functional-150463
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_05T02_09_53_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 05 Feb 2025 02:09:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-150463
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 05 Feb 2025 02:21:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 05 Feb 2025 02:20:46 +0000   Wed, 05 Feb 2025 02:09:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 05 Feb 2025 02:20:46 +0000   Wed, 05 Feb 2025 02:09:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 05 Feb 2025 02:20:46 +0000   Wed, 05 Feb 2025 02:09:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 05 Feb 2025 02:20:46 +0000   Wed, 05 Feb 2025 02:10:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-150463
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	System Info:
	  Machine ID:                 4fe7387702c74aacb28a7655029ab8e7
	  System UUID:                5e3b5c6c-5533-449e-b66c-c44897437511
	  Boot ID:                    966de046-a0b5-476b-8b8b-9607817e1121
	  Kernel Version:             5.15.0-1075-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-9vzwq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  default                     hello-node-fcfd88b6f-hctqr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	  default                     mysql-58ccfd96bb-t8j2q                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 coredns-668d6bf9bc-bqrbg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-150463                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-mts45                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-150463              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-150463     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-snh97                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-150463              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-mq6wm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-dkbll         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node functional-150463 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node functional-150463 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node functional-150463 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           11m                node-controller  Node functional-150463 event: Registered Node functional-150463 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-150463 status is now: NodeReady
	  Normal   RegisteredNode           10m                node-controller  Node functional-150463 event: Registered Node functional-150463 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-150463 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-150463 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-150463 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-150463 event: Registered Node functional-150463 in Controller
	
	
	==> dmesg <==
	[  +0.623739] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023322] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.423487] kauditd_printk_skb: 46 callbacks suppressed
	[Feb 5 02:06] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[  +1.024110] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[  +2.015838] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[  +4.195567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[  +8.187239] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[ +16.126554] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[Feb 5 02:07] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 96 39 da c7 54 f7 96 e7 8e 69 81 d5 08 00
	[Feb 5 02:14] FS-Cache: Duplicate cookie detected
	[  +0.004758] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006757] FS-Cache: O-cookie d=0000000019113d10{9P.session} n=000000009f84c1f9
	[  +0.007541] FS-Cache: O-key=[10] '34323935373433323632'
	[  +0.005363] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006579] FS-Cache: N-cookie d=0000000019113d10{9P.session} n=0000000032d7f8bb
	[  +0.007540] FS-Cache: N-key=[10] '34323935373433323632'
	[ +15.091840] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [22c589e31dbb652d1d3b733f0d43c83c90b0bc7de9c859566aaca377f4c2d81c] <==
	{"level":"info","ts":"2025-02-05T02:10:53.427202Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-05T02:10:53.429694Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-05T02:10:53.429766Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-05T02:10:53.429883Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-05T02:10:53.430078Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-05T02:10:53.430098Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-05T02:10:54.756008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-05T02:10:54.756057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-05T02:10:54.756082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-05T02:10:54.756094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2025-02-05T02:10:54.756102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-02-05T02:10:54.756111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2025-02-05T02:10:54.756128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-02-05T02:10:54.758466Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-150463 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-05T02:10:54.758488Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-05T02:10:54.758508Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-05T02:10:54.758744Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-05T02:10:54.758854Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-05T02:10:54.759488Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-05T02:10:54.759483Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-05T02:10:54.760111Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-02-05T02:10:54.760658Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-05T02:20:54.775685Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1100}
	{"level":"info","ts":"2025-02-05T02:20:54.787811Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1100,"took":"11.799185ms","hash":488145334,"current-db-size-bytes":4165632,"current-db-size":"4.2 MB","current-db-size-in-use-bytes":1728512,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-02-05T02:20:54.787851Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":488145334,"revision":1100,"compact-revision":-1}
	
	
	==> etcd [a379f4573d94bccfdfb3e007eb503ac4b00cf1a852ce829a13a69d0642fc1d55] <==
	{"level":"info","ts":"2025-02-05T02:10:24.537502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-05T02:10:24.537519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-02-05T02:10:24.537530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2025-02-05T02:10:24.537535Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-05T02:10:24.537575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2025-02-05T02:10:24.537587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-05T02:10:24.538602Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-150463 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-05T02:10:24.538608Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-05T02:10:24.538635Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-05T02:10:24.538858Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-05T02:10:24.538886Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-05T02:10:24.539325Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-05T02:10:24.539389Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-05T02:10:24.540161Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-02-05T02:10:24.540708Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-05T02:10:43.243055Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-02-05T02:10:43.243126Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-150463","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2025-02-05T02:10:43.243221Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-05T02:10:43.243322Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-05T02:10:43.261457Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-05T02:10:43.261519Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-05T02:10:43.261648Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-02-05T02:10:43.264186Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-05T02:10:43.264274Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-05T02:10:43.264284Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-150463","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 02:21:23 up  1:03,  0 users,  load average: 0.01, 0.19, 0.29
	Linux functional-150463 5.15.0-1075-gcp #84~20.04.1-Ubuntu SMP Thu Jan 16 20:44:47 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [87850cf31d3cd009084cd8319a827ea4beaa7ccdc066b26cd6320b7b960bce90] <==
	I0205 02:19:17.627346       1 main.go:301] handling current node
	I0205 02:19:27.626892       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:19:27.626929       1 main.go:301] handling current node
	I0205 02:19:37.627381       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:19:37.627416       1 main.go:301] handling current node
	I0205 02:19:47.626525       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:19:47.626562       1 main.go:301] handling current node
	I0205 02:19:57.626441       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:19:57.626739       1 main.go:301] handling current node
	I0205 02:20:07.627252       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:20:07.627300       1 main.go:301] handling current node
	I0205 02:20:17.626818       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:20:17.626878       1 main.go:301] handling current node
	I0205 02:20:27.626858       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:20:27.626898       1 main.go:301] handling current node
	I0205 02:20:37.627266       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:20:37.627301       1 main.go:301] handling current node
	I0205 02:20:47.627050       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:20:47.627095       1 main.go:301] handling current node
	I0205 02:20:57.626590       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:20:57.626628       1 main.go:301] handling current node
	I0205 02:21:07.626561       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:21:07.626597       1 main.go:301] handling current node
	I0205 02:21:17.626614       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:21:17.626644       1 main.go:301] handling current node
	
	
	==> kindnet [ffbcf8d9620143677f05408ccd300266cda4ab13a2e6d86ac97d5fc2d7f3a011] <==
	I0205 02:10:22.930423       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0205 02:10:22.930811       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0205 02:10:22.931010       1 main.go:148] setting mtu 1500 for CNI 
	I0205 02:10:22.931061       1 main.go:178] kindnetd IP family: "ipv4"
	I0205 02:10:22.931096       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0205 02:10:23.346661       1 controller.go:361] Starting controller kube-network-policies
	I0205 02:10:23.346686       1 controller.go:365] Waiting for informer caches to sync
	I0205 02:10:23.346694       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0205 02:10:25.547480       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0205 02:10:25.547579       1 metrics.go:61] Registering metrics
	I0205 02:10:25.547660       1 controller.go:401] Syncing nftables rules
	I0205 02:10:33.347038       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0205 02:10:33.347115       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f8ecd865b69e20cfc31151c3d422fda55078a8a7ef5c99bff25276e120fa52cf] <==
	I0205 02:10:55.825934       1 aggregator.go:171] initial CRD sync complete...
	I0205 02:10:55.826135       1 autoregister_controller.go:144] Starting autoregister controller
	I0205 02:10:55.826457       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0205 02:10:55.826472       1 cache.go:39] Caches are synced for autoregister controller
	I0205 02:10:55.826146       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0205 02:10:55.826194       1 shared_informer.go:320] Caches are synced for configmaps
	I0205 02:10:55.831134       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0205 02:10:55.833212       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0205 02:10:56.534041       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0205 02:10:56.656900       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0205 02:10:57.436952       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0205 02:10:57.553485       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0205 02:10:57.628723       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0205 02:10:57.635749       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0205 02:10:59.043730       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0205 02:10:59.294542       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0205 02:10:59.343377       1 controller.go:615] quota admission added evaluator for: endpoints
	I0205 02:11:16.363654       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.240.33"}
	I0205 02:11:21.827754       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.84.19"}
	I0205 02:11:23.568223       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.201.232"}
	I0205 02:11:28.032378       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.5.178"}
	I0205 02:12:33.518106       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.28.220"}
	I0205 02:14:21.209882       1 controller.go:615] quota admission added evaluator for: namespaces
	I0205 02:14:21.358759       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.122.212"}
	I0205 02:14:21.372564       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.153.68"}
	
	
	==> kube-controller-manager [84dda87c92b8db5a5a9e0edf8ab3f1db9587a639c54a95ef1fe3befd8c43c2da] <==
	E0205 02:14:21.268789       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0205 02:14:21.270394       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="5.563099ms"
	E0205 02:14:21.270425       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0205 02:14:21.284431       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="12.385474ms"
	I0205 02:14:21.330965       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="46.484778ms"
	I0205 02:14:21.331087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="68.602µs"
	I0205 02:14:21.339338       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="13.553687ms"
	I0205 02:14:21.340598       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="43.094µs"
	I0205 02:14:21.345906       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="6.434598ms"
	I0205 02:14:21.345985       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="43.014µs"
	I0205 02:14:21.349228       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="47.045µs"
	I0205 02:14:29.370249       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-150463"
	I0205 02:15:00.184707       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-150463"
	I0205 02:15:50.175420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="6.477124ms"
	I0205 02:15:50.175519       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="61.129µs"
	I0205 02:15:52.182532       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="6.604445ms"
	I0205 02:15:52.182620       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="51.1µs"
	I0205 02:15:56.461611       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="88.619µs"
	I0205 02:16:01.234633       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-150463"
	I0205 02:16:07.481803       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="73.577µs"
	I0205 02:18:06.463195       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="101.927µs"
	I0205 02:18:18.463075       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="88.132µs"
	I0205 02:20:10.464867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="135.449µs"
	I0205 02:20:24.462862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="64.555µs"
	I0205 02:20:46.202679       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-150463"
	
	
	==> kube-controller-manager [feaa0863ed3f4168288a693fef261fd0ecf288fd7172be51d8f2c3ad6e332d08] <==
	I0205 02:10:28.592572       1 shared_informer.go:320] Caches are synced for TTL
	I0205 02:10:28.592606       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0205 02:10:28.592619       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0205 02:10:28.592634       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0205 02:10:28.592666       1 shared_informer.go:320] Caches are synced for crt configmap
	I0205 02:10:28.592749       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0205 02:10:28.593690       1 shared_informer.go:320] Caches are synced for cronjob
	I0205 02:10:28.594932       1 shared_informer.go:320] Caches are synced for taint
	I0205 02:10:28.595065       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0205 02:10:28.595175       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-150463"
	I0205 02:10:28.595216       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0205 02:10:28.596278       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0205 02:10:28.596367       1 shared_informer.go:320] Caches are synced for resource quota
	I0205 02:10:28.597473       1 shared_informer.go:320] Caches are synced for resource quota
	I0205 02:10:28.598190       1 shared_informer.go:320] Caches are synced for job
	I0205 02:10:28.602277       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0205 02:10:28.693051       1 shared_informer.go:320] Caches are synced for garbage collector
	I0205 02:10:28.693081       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0205 02:10:28.693092       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0205 02:10:28.702680       1 shared_informer.go:320] Caches are synced for garbage collector
	I0205 02:10:28.900801       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="357.892708ms"
	I0205 02:10:28.900922       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="81.331µs"
	I0205 02:10:32.586088       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-150463"
	I0205 02:10:34.265238       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="6.428661ms"
	I0205 02:10:34.265365       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="84.796µs"
	
	
	==> kube-proxy [be2a92594c2a8a993ad881f90f94860172be7c624bbec024891da0fc1219537d] <==
	I0205 02:10:57.062476       1 server_linux.go:66] "Using iptables proxy"
	I0205 02:10:57.191106       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0205 02:10:57.191171       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0205 02:10:57.231342       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0205 02:10:57.231409       1 server_linux.go:170] "Using iptables Proxier"
	I0205 02:10:57.233279       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0205 02:10:57.233851       1 server.go:497] "Version info" version="v1.32.1"
	I0205 02:10:57.234072       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 02:10:57.235491       1 config.go:105] "Starting endpoint slice config controller"
	I0205 02:10:57.235541       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0205 02:10:57.235550       1 config.go:329] "Starting node config controller"
	I0205 02:10:57.235561       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0205 02:10:57.235592       1 config.go:199] "Starting service config controller"
	I0205 02:10:57.235603       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0205 02:10:57.335889       1 shared_informer.go:320] Caches are synced for node config
	I0205 02:10:57.335905       1 shared_informer.go:320] Caches are synced for service config
	I0205 02:10:57.335917       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d193a52476c7a254dbe7985bdf70ca3543cda9e32b4975f68b559b2a8d8ff4e7] <==
	I0205 02:10:22.740789       1 server_linux.go:66] "Using iptables proxy"
	E0205 02:10:22.954424       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-150463\": dial tcp 192.168.49.2:8441: connect: connection refused"
	I0205 02:10:25.547762       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0205 02:10:25.547842       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0205 02:10:25.836453       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0205 02:10:25.836517       1 server_linux.go:170] "Using iptables Proxier"
	I0205 02:10:25.839083       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0205 02:10:25.839513       1 server.go:497] "Version info" version="v1.32.1"
	I0205 02:10:25.839553       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 02:10:25.841722       1 config.go:105] "Starting endpoint slice config controller"
	I0205 02:10:25.841810       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0205 02:10:25.841748       1 config.go:199] "Starting service config controller"
	I0205 02:10:25.841882       1 config.go:329] "Starting node config controller"
	I0205 02:10:25.841908       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0205 02:10:25.841976       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0205 02:10:25.942295       1 shared_informer.go:320] Caches are synced for service config
	I0205 02:10:25.942378       1 shared_informer.go:320] Caches are synced for node config
	I0205 02:10:25.942490       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [26456add96d27fe5e7ad5798e8f6f9829760d41136771bc410e4aff992926674] <==
	I0205 02:10:23.802961       1 serving.go:386] Generated self-signed cert in-memory
	I0205 02:10:25.651919       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0205 02:10:25.651958       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 02:10:25.735531       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0205 02:10:25.735663       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0205 02:10:25.735722       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0205 02:10:25.735811       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0205 02:10:25.735682       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0205 02:10:25.736014       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0205 02:10:25.735869       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0205 02:10:25.743075       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0205 02:10:25.736858       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0205 02:10:25.836040       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0205 02:10:25.843281       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0205 02:10:43.243242       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0205 02:10:43.243412       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0205 02:10:43.243549       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0205 02:10:43.243581       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I0205 02:10:43.243755       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	E0205 02:10:43.244649       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c3040259cf4773d34a84e579849b3c463837da1db975da812808f02d65cba28e] <==
	I0205 02:10:53.847027       1 serving.go:386] Generated self-signed cert in-memory
	W0205 02:10:55.682316       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0205 02:10:55.682366       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0205 02:10:55.682379       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0205 02:10:55.682391       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0205 02:10:55.743517       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0205 02:10:55.743544       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0205 02:10:55.745501       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0205 02:10:55.745566       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0205 02:10:55.745680       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0205 02:10:55.745723       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0205 02:10:55.845940       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 05 02:20:52 functional-150463 kubelet[5325]: E0205 02:20:52.659970    5325 manager.go:1116] Failed to create existing container: /crio-ac70688bb094668f57da51bd0486ab0cea0aaf66575cfa4378b0eceb3dcb2c02: Error finding container ac70688bb094668f57da51bd0486ab0cea0aaf66575cfa4378b0eceb3dcb2c02: Status 404 returned error can't find the container with id ac70688bb094668f57da51bd0486ab0cea0aaf66575cfa4378b0eceb3dcb2c02
	Feb 05 02:20:52 functional-150463 kubelet[5325]: E0205 02:20:52.660208    5325 manager.go:1116] Failed to create existing container: /docker/1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3/crio-ac70688bb094668f57da51bd0486ab0cea0aaf66575cfa4378b0eceb3dcb2c02: Error finding container ac70688bb094668f57da51bd0486ab0cea0aaf66575cfa4378b0eceb3dcb2c02: Status 404 returned error can't find the container with id ac70688bb094668f57da51bd0486ab0cea0aaf66575cfa4378b0eceb3dcb2c02
	Feb 05 02:20:52 functional-150463 kubelet[5325]: E0205 02:20:52.660389    5325 manager.go:1116] Failed to create existing container: /docker/1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3/crio-08770c641feeeabfb87181fe0de21f80770aa011a8d13d14dfb607cb59fcd128: Error finding container 08770c641feeeabfb87181fe0de21f80770aa011a8d13d14dfb607cb59fcd128: Status 404 returned error can't find the container with id 08770c641feeeabfb87181fe0de21f80770aa011a8d13d14dfb607cb59fcd128
	Feb 05 02:20:52 functional-150463 kubelet[5325]: E0205 02:20:52.660549    5325 manager.go:1116] Failed to create existing container: /docker/1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3/crio-69afac71b95eac15d4f8290a02db97a09c253b7f133fef8af814d1501e198c1e: Error finding container 69afac71b95eac15d4f8290a02db97a09c253b7f133fef8af814d1501e198c1e: Status 404 returned error can't find the container with id 69afac71b95eac15d4f8290a02db97a09c253b7f133fef8af814d1501e198c1e
	Feb 05 02:20:52 functional-150463 kubelet[5325]: E0205 02:20:52.660707    5325 manager.go:1116] Failed to create existing container: /crio-3781e3fad15e8308730af7431c59105dde85ca8659d097f8a834f7343dec9c97: Error finding container 3781e3fad15e8308730af7431c59105dde85ca8659d097f8a834f7343dec9c97: Status 404 returned error can't find the container with id 3781e3fad15e8308730af7431c59105dde85ca8659d097f8a834f7343dec9c97
	Feb 05 02:20:52 functional-150463 kubelet[5325]: E0205 02:20:52.660883    5325 manager.go:1116] Failed to create existing container: /docker/1b91ea1b28c8a4a08022d883e8bde3def88b2727d2d8fc76771704b626740fa3/crio-dcccc61ea89057f13a3a920567b5dbef6e253f6878a68578593afc8fc1a3b997: Error finding container dcccc61ea89057f13a3a920567b5dbef6e253f6878a68578593afc8fc1a3b997: Status 404 returned error can't find the container with id dcccc61ea89057f13a3a920567b5dbef6e253f6878a68578593afc8fc1a3b997
	Feb 05 02:20:52 functional-150463 kubelet[5325]: E0205 02:20:52.737396    5325 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738722052737229378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243806,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:20:52 functional-150463 kubelet[5325]: E0205 02:20:52.737431    5325 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738722052737229378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243806,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:20:54 functional-150463 kubelet[5325]: E0205 02:20:54.454453    5325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-t8j2q" podUID="575799c9-d22c-4937-9da4-e3ac6f5deea5"
	Feb 05 02:20:57 functional-150463 kubelet[5325]: E0205 02:20:57.092276    5325 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Feb 05 02:20:57 functional-150463 kubelet[5325]: E0205 02:20:57.092334    5325 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Feb 05 02:20:57 functional-150463 kubelet[5325]: E0205 02:20:57.092431    5325 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fm5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(deb6ce88-cff6-4e4a-8ced-26424587b7f8): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Feb 05 02:20:57 functional-150463 kubelet[5325]: E0205 02:20:57.093609    5325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="deb6ce88-cff6-4e4a-8ced-26424587b7f8"
	Feb 05 02:21:02 functional-150463 kubelet[5325]: E0205 02:21:02.739083    5325 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738722062738903307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243806,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:21:02 functional-150463 kubelet[5325]: E0205 02:21:02.739123    5325 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738722062738903307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243806,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:21:03 functional-150463 kubelet[5325]: E0205 02:21:03.454398    5325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e68f4c0c-eda3-4985-8b40-b36779e5155e"
	Feb 05 02:21:09 functional-150463 kubelet[5325]: E0205 02:21:09.454426    5325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-t8j2q" podUID="575799c9-d22c-4937-9da4-e3ac6f5deea5"
	Feb 05 02:21:12 functional-150463 kubelet[5325]: E0205 02:21:12.453845    5325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="deb6ce88-cff6-4e4a-8ced-26424587b7f8"
	Feb 05 02:21:12 functional-150463 kubelet[5325]: E0205 02:21:12.740621    5325 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738722072740407757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243806,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:21:12 functional-150463 kubelet[5325]: E0205 02:21:12.740664    5325 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738722072740407757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243806,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:21:16 functional-150463 kubelet[5325]: E0205 02:21:16.454120    5325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="e68f4c0c-eda3-4985-8b40-b36779e5155e"
	Feb 05 02:21:22 functional-150463 kubelet[5325]: E0205 02:21:22.454088    5325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-t8j2q" podUID="575799c9-d22c-4937-9da4-e3ac6f5deea5"
	Feb 05 02:21:22 functional-150463 kubelet[5325]: E0205 02:21:22.742137    5325 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738722082741919654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243806,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:21:22 functional-150463 kubelet[5325]: E0205 02:21:22.742178    5325 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1738722082741919654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:243806,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 05 02:21:23 functional-150463 kubelet[5325]: E0205 02:21:23.453607    5325 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="deb6ce88-cff6-4e4a-8ced-26424587b7f8"
	
	
	==> kubernetes-dashboard [dc07e53bb83c78f76cdee784999ede3a2e1571b0ea9a4ff6e38fb111203c0e0d] <==
	2025/02/05 02:15:49 Using namespace: kubernetes-dashboard
	2025/02/05 02:15:49 Using in-cluster config to connect to apiserver
	2025/02/05 02:15:49 Using secret token for csrf signing
	2025/02/05 02:15:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/02/05 02:15:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/02/05 02:15:49 Successful initial request to the apiserver, version: v1.32.1
	2025/02/05 02:15:49 Generating JWE encryption key
	2025/02/05 02:15:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/02/05 02:15:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/02/05 02:15:49 Initializing JWE encryption key from synchronized object
	2025/02/05 02:15:49 Creating in-cluster Sidecar client
	2025/02/05 02:15:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/05 02:15:49 Serving insecurely on HTTP port: 9090
	2025/02/05 02:16:19 Successful request to sidecar
	2025/02/05 02:15:49 Starting overwatch
	
	
	==> storage-provisioner [6e971732adbb9ce3acc29d5afba546c9191a901c6ddbe2a8bb8b092f3fda5789] <==
	I0205 02:10:38.147479       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0205 02:10:38.155446       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0205 02:10:38.155493       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [c1cff4b786f8c376f923e458bf54e664b37267422ffc053e6c5c77e05adbde2c] <==
	I0205 02:10:56.946183       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0205 02:10:57.031482       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0205 02:10:57.031547       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0205 02:11:14.428114       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0205 02:11:14.428182       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"07992596-d1ff-442c-8e4e-a6d8cbbc4a4c", APIVersion:"v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-150463_1fa065fd-ae27-43ef-8859-a18448424289 became leader
	I0205 02:11:14.428269       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-150463_1fa065fd-ae27-43ef-8859-a18448424289!
	I0205 02:11:14.528713       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-150463_1fa065fd-ae27-43ef-8859-a18448424289!
	I0205 02:11:28.534186       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0205 02:11:28.534319       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"47580526-76b9-4ded-a4fc-9a25d88c05c6", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0205 02:11:28.534253       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    f6d7fb24-d290-48fa-9e55-9f5b97fc17f4 346 0 2025-02-05 02:09:57 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-02-05 02:09:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-47580526-76b9-4ded-a4fc-9a25d88c05c6 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  47580526-76b9-4ded-a4fc-9a25d88c05c6 704 0 2025-02-05 02:11:28 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-02-05 02:11:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-02-05 02:11:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0205 02:11:28.534740       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-47580526-76b9-4ded-a4fc-9a25d88c05c6" provisioned
	I0205 02:11:28.534763       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0205 02:11:28.534769       1 volume_store.go:212] Trying to save persistentvolume "pvc-47580526-76b9-4ded-a4fc-9a25d88c05c6"
	I0205 02:11:28.544660       1 volume_store.go:219] persistentvolume "pvc-47580526-76b9-4ded-a4fc-9a25d88c05c6" saved
	I0205 02:11:28.545809       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"47580526-76b9-4ded-a4fc-9a25d88c05c6", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-47580526-76b9-4ded-a4fc-9a25d88c05c6
	

                                                
                                                
-- /stdout --
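Note: the storage-provisioner log above embeds the claim's last-applied-configuration, and the volume it creates is backed by a host directory at /tmp/hostpath-provisioner/default/myclaim. Reconstructed from that log (a sketch for reference only, not necessarily identical to the test's actual testdata manifest), the claim is roughly:

# Sketch reconstructed from the provisioner log above, for reference only.
kubectl --context functional-150463 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
EOF

Because no storageClassName is set, the claim binds to the default "standard" class (annotated storageclass.kubernetes.io/is-default-class: true in the log), so provisioning itself succeeded here; the sp-pod that consumes the claim fails later on an image pull, not on storage.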
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-150463 -n functional-150463
helpers_test.go:261: (dbg) Run:  kubectl --context functional-150463 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-t8j2q nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-150463 describe pod busybox-mount mysql-58ccfd96bb-t8j2q nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-150463 describe pod busybox-mount mysql-58ccfd96bb-t8j2q nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-150463/192.168.49.2
	Start Time:       Wed, 05 Feb 2025 02:12:47 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://cd9697378be8d05b510ab1fd69a0011bdf37016a50df76c381002bd0eb384467
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 05 Feb 2025 02:14:13 +0000
	      Finished:     Wed, 05 Feb 2025 02:14:13 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qphfx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-qphfx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  8m36s  default-scheduler  Successfully assigned default/busybox-mount to functional-150463
	  Normal  Pulling    8m36s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     7m11s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.054s (1m25.446s including waiting). Image size: 4631262 bytes.
	  Normal  Created    7m11s  kubelet            Created container: mount-munger
	  Normal  Started    7m11s  kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-t8j2q
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-150463/192.168.49.2
	Start Time:       Wed, 05 Feb 2025 02:11:21 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g6p8t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-g6p8t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-58ccfd96bb-t8j2q to functional-150463
	  Warning  Failed     9m32s                 kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m (x5 over 10m)      kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     89s (x5 over 9m32s)   kubelet            Error: ErrImagePull
	  Warning  Failed     89s (x4 over 7m43s)   kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     15s (x16 over 9m31s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2s (x17 over 9m31s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-150463/192.168.49.2
	Start Time:       Wed, 05 Feb 2025 02:11:23 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ht4dn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ht4dn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/nginx-svc to functional-150463
	  Normal   Pulling    89s (x5 over 10m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     58s (x5 over 9m1s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     58s (x5 over 9m1s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    8s (x15 over 9m1s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     8s (x15 over 9m1s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-150463/192.168.49.2
	Start Time:       Wed, 05 Feb 2025 02:11:28 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7fm5r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-7fm5r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  9m55s                default-scheduler  Successfully assigned default/sp-pod to functional-150463
	  Normal   Pulling    58s (x5 over 9m55s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     27s (x5 over 8m28s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     27s (x5 over 8m28s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x13 over 8m28s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     1s (x13 over 8m28s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.73s)
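Of the four non-running pods above, busybox-mount actually completed. The other three (mysql-58ccfd96bb-t8j2q, nginx-svc, sp-pod) share one root cause: docker.io answers toomanyrequests for mysql:5.7, nginx:alpine and nginx, so the kubelet sits in ImagePullBackOff until the test's wait deadline expires. A hedged local mitigation (illustrative only, not part of the test harness, and assuming the host itself still has Docker Hub pull quota or an authenticated docker login) is to pre-load the images into the node so the in-cluster kubelet never contacts Docker Hub:

# Illustrative: pull on the host, then side-load into the functional-150463 node.
docker pull docker.io/mysql:5.7
docker pull docker.io/nginx:alpine
docker pull docker.io/nginx
minikube -p functional-150463 image load docker.io/mysql:5.7
minikube -p functional-150463 image load docker.io/nginx:alpine
minikube -p functional-150463 image load docker.io/nginx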

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-150463 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e68f4c0c-eda3-4985-8b40-b36779e5155e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-150463 -n functional-150463
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-02-05 02:15:23.86061709 +0000 UTC m=+720.757910349
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-150463 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-150463 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-150463/192.168.49.2
Start Time:       Wed, 05 Feb 2025 02:11:23 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
IP:  10.244.0.5
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ht4dn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-ht4dn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age               From               Message
----     ------     ----              ----               -------
Normal   Scheduled  4m                default-scheduler  Successfully assigned default/nginx-svc to functional-150463
Warning  Failed     71s (x2 over 3m)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     71s (x2 over 3m)  kubelet            Error: ErrImagePull
Normal   BackOff    56s (x2 over 3m)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     56s (x2 over 3m)  kubelet            Error: ImagePullBackOff
Normal   Pulling    45s (x3 over 4m)  kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-150463 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-150463 logs nginx-svc -n default: exit status 1 (61.285785ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-150463 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.60s)
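kubectl logs is rejected here because the nginx container has never started, so the describe events above are the only diagnostics the test captures. A hedged alternative (illustrative, not something the test runs) is to read the waiting state straight from the pod status, which reports the current reason (ErrImagePull or ImagePullBackOff) and its message:

# Illustrative: inspect the container's waiting state when logs are unavailable.
kubectl --context functional-150463 get pod nginx-svc -n default \
  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}{"\n"}{.status.containerStatuses[0].state.waiting.message}{"\n"}'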

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (104.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0205 02:15:23.982426   19390 retry.go:31] will retry after 3.469244099s: Temporary Error: Get "http:": http: no Host in request URL
I0205 02:15:27.451794   19390 retry.go:31] will retry after 3.290884994s: Temporary Error: Get "http:": http: no Host in request URL
I0205 02:15:30.743436   19390 retry.go:31] will retry after 6.010457017s: Temporary Error: Get "http:": http: no Host in request URL
I0205 02:15:36.754742   19390 retry.go:31] will retry after 8.125088188s: Temporary Error: Get "http:": http: no Host in request URL
E0205 02:15:36.830050   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
I0205 02:15:44.880917   19390 retry.go:31] will retry after 9.871011009s: Temporary Error: Get "http:": http: no Host in request URL
I0205 02:15:54.752286   19390 retry.go:31] will retry after 27.631768951s: Temporary Error: Get "http:": http: no Host in request URL
E0205 02:16:04.533738   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
I0205 02:16:22.384749   19390 retry.go:31] will retry after 45.9031254s: Temporary Error: Get "http:": http: no Host in request URL
2025/02/05 02:16:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-150463 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
nginx-svc   LoadBalancer   10.108.201.232   10.108.201.232   80:32647/TCP   5m45s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (104.36s)
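The retries above request the literal URL "http:", meaning the hostname the test stored for the service appears to have been empty, even though the tunnel had already assigned nginx-svc the external IP 10.108.201.232. As a manual check while minikube tunnel is running (a sketch, not part of the test), the IP can be read from the service status and hit directly:

# Illustrative: resolve the tunnel-assigned LoadBalancer IP and curl it.
EXTERNAL_IP=$(kubectl --context functional-150463 get svc nginx-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://${EXTERNAL_IP}/" | grep 'Welcome to nginx!'

Even with the correct IP this would still fail in this run, because the nginx-svc pod never managed to pull its image (see the WaitService/Setup failure above).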

                                                
                                    
TestNetworkPlugins/group/flannel/Start (266s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p flannel-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: exit status 80 (4m25.976724424s)

                                                
                                                
-- stdout --
	* [flannel-315000] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "flannel-315000" primary control-plane node in "flannel-315000" cluster
	* Pulling base image v0.0.46 ...
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.24.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Flannel (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
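The start command exits with status 80 after roughly four and a half minutes; the verbose driver log follows in the stderr block below. For local triage one could collect the cluster logs and pod state from the profile afterwards (illustrative commands, assuming the flannel-315000 container is still running after the failed start):

# Illustrative: gather logs and pod state from the failed flannel profile.
out/minikube-linux-amd64 -p flannel-315000 logs
kubectl --context flannel-315000 get pods -A -o wide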
** stderr ** 
	I0205 02:49:26.451154  292945 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:49:26.451310  292945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:49:26.451320  292945 out.go:358] Setting ErrFile to fd 2...
	I0205 02:49:26.451325  292945 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:49:26.451736  292945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
	I0205 02:49:26.452663  292945 out.go:352] Setting JSON to false
	I0205 02:49:26.454256  292945 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5512,"bootTime":1738718254,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 02:49:26.454395  292945 start.go:139] virtualization: kvm guest
	I0205 02:49:26.456552  292945 out.go:177] * [flannel-315000] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 02:49:26.458234  292945 notify.go:220] Checking for updates...
	I0205 02:49:26.458240  292945 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 02:49:26.459890  292945 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 02:49:26.461282  292945 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig
	I0205 02:49:26.462738  292945 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube
	I0205 02:49:26.464057  292945 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 02:49:26.465335  292945 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 02:49:26.467222  292945 config.go:182] Loaded profile config "custom-flannel-315000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:49:26.467392  292945 config.go:182] Loaded profile config "enable-default-cni-315000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:49:26.467514  292945 config.go:182] Loaded profile config "kubernetes-upgrade-925222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:49:26.467668  292945 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 02:49:26.499600  292945 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0205 02:49:26.499751  292945 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0205 02:49:26.559264  292945 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:75 SystemTime:2025-02-05 02:49:26.547372913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0205 02:49:26.559578  292945 docker.go:318] overlay module found
	I0205 02:49:26.561292  292945 out.go:177] * Using the docker driver based on user configuration
	I0205 02:49:26.562840  292945 start.go:297] selected driver: docker
	I0205 02:49:26.562863  292945 start.go:901] validating driver "docker" against <nil>
	I0205 02:49:26.562875  292945 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 02:49:26.564053  292945 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0205 02:49:26.625632  292945 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:75 SystemTime:2025-02-05 02:49:26.615841329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0205 02:49:26.625792  292945 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0205 02:49:26.626045  292945 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0205 02:49:26.628126  292945 out.go:177] * Using Docker driver with root privileges
	I0205 02:49:26.629631  292945 cni.go:84] Creating CNI manager for "flannel"
	I0205 02:49:26.629659  292945 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0205 02:49:26.629770  292945 start.go:340] cluster config:
	{Name:flannel-315000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:flannel-315000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Netwo
rkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPU
s: AutoPauseInterval:1m0s}
	I0205 02:49:26.631448  292945 out.go:177] * Starting "flannel-315000" primary control-plane node in "flannel-315000" cluster
	I0205 02:49:26.632485  292945 cache.go:121] Beginning downloading kic base image for docker with crio
	I0205 02:49:26.633818  292945 out.go:177] * Pulling base image v0.0.46 ...
	I0205 02:49:26.635158  292945 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 02:49:26.635214  292945 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0205 02:49:26.635226  292945 cache.go:56] Caching tarball of preloaded images
	I0205 02:49:26.635284  292945 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0205 02:49:26.635339  292945 preload.go:172] Found /home/jenkins/minikube-integration/20363-12617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0205 02:49:26.635350  292945 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0205 02:49:26.635493  292945 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/config.json ...
	I0205 02:49:26.635524  292945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/config.json: {Name:mkbd69d4795ddc962fd9bf73d4ebe86574931bcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:49:26.661884  292945 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0205 02:49:26.661910  292945 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0205 02:49:26.661932  292945 cache.go:230] Successfully downloaded all kic artifacts
	I0205 02:49:26.661962  292945 start.go:360] acquireMachinesLock for flannel-315000: {Name:mk75afc15cc76065d504fe901b249d4ea4b33f18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0205 02:49:26.662086  292945 start.go:364] duration metric: took 101.685µs to acquireMachinesLock for "flannel-315000"
	I0205 02:49:26.662116  292945 start.go:93] Provisioning new machine with config: &{Name:flannel-315000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:flannel-315000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 02:49:26.662213  292945 start.go:125] createHost starting for "" (driver="docker")
	I0205 02:49:26.664371  292945 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0205 02:49:26.664694  292945 start.go:159] libmachine.API.Create for "flannel-315000" (driver="docker")
	I0205 02:49:26.664782  292945 client.go:168] LocalClient.Create starting
	I0205 02:49:26.664896  292945 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca.pem
	I0205 02:49:26.664944  292945 main.go:141] libmachine: Decoding PEM data...
	I0205 02:49:26.664959  292945 main.go:141] libmachine: Parsing certificate...
	I0205 02:49:26.665021  292945 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20363-12617/.minikube/certs/cert.pem
	I0205 02:49:26.665051  292945 main.go:141] libmachine: Decoding PEM data...
	I0205 02:49:26.665070  292945 main.go:141] libmachine: Parsing certificate...
	I0205 02:49:26.665434  292945 cli_runner.go:164] Run: docker network inspect flannel-315000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0205 02:49:26.684617  292945 cli_runner.go:211] docker network inspect flannel-315000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0205 02:49:26.684697  292945 network_create.go:284] running [docker network inspect flannel-315000] to gather additional debugging logs...
	I0205 02:49:26.684718  292945 cli_runner.go:164] Run: docker network inspect flannel-315000
	W0205 02:49:26.705339  292945 cli_runner.go:211] docker network inspect flannel-315000 returned with exit code 1
	I0205 02:49:26.705371  292945 network_create.go:287] error running [docker network inspect flannel-315000]: docker network inspect flannel-315000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network flannel-315000 not found
	I0205 02:49:26.705399  292945 network_create.go:289] output of [docker network inspect flannel-315000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network flannel-315000 not found
	
	** /stderr **
	I0205 02:49:26.705511  292945 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0205 02:49:26.728078  292945 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-86850cebc981 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:8b:ea:4c:15} reservation:<nil>}
	I0205 02:49:26.729130  292945 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-28fa197d2534 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:45:bc:32:d9} reservation:<nil>}
	I0205 02:49:26.730142  292945 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-75d307769e7c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:f8:64:06:ec} reservation:<nil>}
	I0205 02:49:26.731125  292945 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ccbdd69aa2cb IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:3b:92:04:f5} reservation:<nil>}
	I0205 02:49:26.731905  292945 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-7346efb0857e IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:82:73:ad:3b} reservation:<nil>}
	I0205 02:49:26.732991  292945 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee6f40}
	I0205 02:49:26.733029  292945 network_create.go:124] attempt to create docker network flannel-315000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0205 02:49:26.733077  292945 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-315000 flannel-315000
	I0205 02:49:26.810345  292945 network_create.go:108] docker network flannel-315000 192.168.94.0/24 created
	I0205 02:49:26.810377  292945 kic.go:121] calculated static IP "192.168.94.2" for the "flannel-315000" container
	I0205 02:49:26.810438  292945 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0205 02:49:26.831523  292945 cli_runner.go:164] Run: docker volume create flannel-315000 --label name.minikube.sigs.k8s.io=flannel-315000 --label created_by.minikube.sigs.k8s.io=true
	I0205 02:49:26.854938  292945 oci.go:103] Successfully created a docker volume flannel-315000
	I0205 02:49:26.855024  292945 cli_runner.go:164] Run: docker run --rm --name flannel-315000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-315000 --entrypoint /usr/bin/test -v flannel-315000:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0205 02:49:27.432025  292945 oci.go:107] Successfully prepared a docker volume flannel-315000
	I0205 02:49:27.432078  292945 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 02:49:27.432102  292945 kic.go:194] Starting extracting preloaded images to volume ...
	I0205 02:49:27.432192  292945 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20363-12617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v flannel-315000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0205 02:49:32.442430  292945 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20363-12617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v flannel-315000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (5.010182859s)
	I0205 02:49:32.442465  292945 kic.go:203] duration metric: took 5.010359666s to extract preloaded images to volume ...
	W0205 02:49:32.442635  292945 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0205 02:49:32.442762  292945 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0205 02:49:32.500257  292945 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname flannel-315000 --name flannel-315000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-315000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=flannel-315000 --network flannel-315000 --ip 192.168.94.2 --volume flannel-315000:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0205 02:49:32.926054  292945 cli_runner.go:164] Run: docker container inspect flannel-315000 --format={{.State.Running}}
	I0205 02:49:32.951929  292945 cli_runner.go:164] Run: docker container inspect flannel-315000 --format={{.State.Status}}
	I0205 02:49:32.975591  292945 cli_runner.go:164] Run: docker exec flannel-315000 stat /var/lib/dpkg/alternatives/iptables
	I0205 02:49:33.029347  292945 oci.go:144] the created container "flannel-315000" has a running status.
	I0205 02:49:33.029386  292945 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20363-12617/.minikube/machines/flannel-315000/id_rsa...
	I0205 02:49:33.224888  292945 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20363-12617/.minikube/machines/flannel-315000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0205 02:49:33.285803  292945 cli_runner.go:164] Run: docker container inspect flannel-315000 --format={{.State.Status}}
	I0205 02:49:33.322277  292945 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0205 02:49:33.322302  292945 kic_runner.go:114] Args: [docker exec --privileged flannel-315000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0205 02:49:33.455964  292945 cli_runner.go:164] Run: docker container inspect flannel-315000 --format={{.State.Status}}
	I0205 02:49:33.527197  292945 machine.go:93] provisionDockerMachine start ...
	I0205 02:49:33.527292  292945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-315000
	I0205 02:49:33.573734  292945 main.go:141] libmachine: Using SSH client type: native
	I0205 02:49:33.574026  292945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I0205 02:49:33.574042  292945 main.go:141] libmachine: About to run SSH command:
	hostname
	I0205 02:49:33.770265  292945 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-315000
	
	I0205 02:49:33.770302  292945 ubuntu.go:169] provisioning hostname "flannel-315000"
	I0205 02:49:33.770369  292945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-315000
	I0205 02:49:33.809508  292945 main.go:141] libmachine: Using SSH client type: native
	I0205 02:49:33.809769  292945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I0205 02:49:33.809790  292945 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-315000 && echo "flannel-315000" | sudo tee /etc/hostname
	I0205 02:49:33.995718  292945 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-315000
	
	I0205 02:49:33.995841  292945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-315000
	I0205 02:49:34.015838  292945 main.go:141] libmachine: Using SSH client type: native
	I0205 02:49:34.016028  292945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I0205 02:49:34.016049  292945 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-315000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-315000/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-315000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0205 02:49:34.179367  292945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0205 02:49:34.179400  292945 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20363-12617/.minikube CaCertPath:/home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20363-12617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20363-12617/.minikube}
	I0205 02:49:34.179424  292945 ubuntu.go:177] setting up certificates
	I0205 02:49:34.179436  292945 provision.go:84] configureAuth start
	I0205 02:49:34.179493  292945 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-315000
	I0205 02:49:34.202485  292945 provision.go:143] copyHostCerts
	I0205 02:49:34.202560  292945 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12617/.minikube/ca.pem, removing ...
	I0205 02:49:34.202573  292945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12617/.minikube/ca.pem
	I0205 02:49:34.202651  292945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20363-12617/.minikube/ca.pem (1078 bytes)
	I0205 02:49:34.202756  292945 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12617/.minikube/cert.pem, removing ...
	I0205 02:49:34.202767  292945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12617/.minikube/cert.pem
	I0205 02:49:34.202799  292945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20363-12617/.minikube/cert.pem (1123 bytes)
	I0205 02:49:34.202868  292945 exec_runner.go:144] found /home/jenkins/minikube-integration/20363-12617/.minikube/key.pem, removing ...
	I0205 02:49:34.202877  292945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20363-12617/.minikube/key.pem
	I0205 02:49:34.202906  292945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20363-12617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20363-12617/.minikube/key.pem (1679 bytes)
	I0205 02:49:34.202970  292945 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20363-12617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca-key.pem org=jenkins.flannel-315000 san=[127.0.0.1 192.168.94.2 flannel-315000 localhost minikube]
	I0205 02:49:34.408510  292945 provision.go:177] copyRemoteCerts
	I0205 02:49:34.408583  292945 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0205 02:49:34.408631  292945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-315000
	I0205 02:49:34.428000  292945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/flannel-315000/id_rsa Username:docker}
	I0205 02:49:34.527301  292945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0205 02:49:34.560735  292945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0205 02:49:34.588512  292945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0205 02:49:34.614769  292945 provision.go:87] duration metric: took 435.316114ms to configureAuth
	I0205 02:49:34.614798  292945 ubuntu.go:193] setting minikube options for container-runtime
	I0205 02:49:34.615058  292945 config.go:182] Loaded profile config "flannel-315000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:49:34.615164  292945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-315000
	I0205 02:49:34.641819  292945 main.go:141] libmachine: Using SSH client type: native
	I0205 02:49:34.642063  292945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I0205 02:49:34.642090  292945 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0205 02:49:34.889087  292945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0205 02:49:34.889117  292945 machine.go:96] duration metric: took 1.361897872s to provisionDockerMachine
	I0205 02:49:34.889131  292945 client.go:171] duration metric: took 8.224337489s to LocalClient.Create
	I0205 02:49:34.889153  292945 start.go:167] duration metric: took 8.224461283s to libmachine.API.Create "flannel-315000"
	I0205 02:49:34.889163  292945 start.go:293] postStartSetup for "flannel-315000" (driver="docker")
	I0205 02:49:34.889176  292945 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0205 02:49:34.889247  292945 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0205 02:49:34.889301  292945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-315000
	I0205 02:49:34.911867  292945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/flannel-315000/id_rsa Username:docker}
	I0205 02:49:35.007781  292945 ssh_runner.go:195] Run: cat /etc/os-release
	I0205 02:49:35.011474  292945 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0205 02:49:35.011518  292945 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0205 02:49:35.011532  292945 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0205 02:49:35.011540  292945 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0205 02:49:35.011556  292945 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12617/.minikube/addons for local assets ...
	I0205 02:49:35.011634  292945 filesync.go:126] Scanning /home/jenkins/minikube-integration/20363-12617/.minikube/files for local assets ...
	I0205 02:49:35.011747  292945 filesync.go:149] local asset: /home/jenkins/minikube-integration/20363-12617/.minikube/files/etc/ssl/certs/193902.pem -> 193902.pem in /etc/ssl/certs
	I0205 02:49:35.011878  292945 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0205 02:49:35.021814  292945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/files/etc/ssl/certs/193902.pem --> /etc/ssl/certs/193902.pem (1708 bytes)
	I0205 02:49:35.049045  292945 start.go:296] duration metric: took 159.867974ms for postStartSetup
	I0205 02:49:35.049401  292945 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-315000
	I0205 02:49:35.070637  292945 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/config.json ...
	I0205 02:49:35.070906  292945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0205 02:49:35.070953  292945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-315000
	I0205 02:49:35.092032  292945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/flannel-315000/id_rsa Username:docker}
	I0205 02:49:35.186791  292945 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0205 02:49:35.191622  292945 start.go:128] duration metric: took 8.529388752s to createHost
	I0205 02:49:35.191653  292945 start.go:83] releasing machines lock for "flannel-315000", held for 8.529552425s
	I0205 02:49:35.191739  292945 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" flannel-315000
	I0205 02:49:35.212762  292945 ssh_runner.go:195] Run: cat /version.json
	I0205 02:49:35.212835  292945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-315000
	I0205 02:49:35.213047  292945 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0205 02:49:35.213124  292945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-315000
	I0205 02:49:35.235763  292945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/flannel-315000/id_rsa Username:docker}
	I0205 02:49:35.239797  292945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/flannel-315000/id_rsa Username:docker}
	I0205 02:49:35.429650  292945 ssh_runner.go:195] Run: systemctl --version
	I0205 02:49:35.434510  292945 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0205 02:49:35.579508  292945 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0205 02:49:35.584478  292945 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 02:49:35.605127  292945 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0205 02:49:35.605212  292945 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0205 02:49:35.639069  292945 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0205 02:49:35.639098  292945 start.go:495] detecting cgroup driver to use...
	I0205 02:49:35.639135  292945 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0205 02:49:35.639204  292945 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0205 02:49:35.655471  292945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0205 02:49:35.667570  292945 docker.go:217] disabling cri-docker service (if available) ...
	I0205 02:49:35.667630  292945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0205 02:49:35.682604  292945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0205 02:49:35.698623  292945 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0205 02:49:35.800806  292945 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0205 02:49:35.894981  292945 docker.go:233] disabling docker service ...
	I0205 02:49:35.895045  292945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0205 02:49:35.918640  292945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0205 02:49:35.932178  292945 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0205 02:49:36.033502  292945 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0205 02:49:36.127556  292945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0205 02:49:36.140962  292945 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0205 02:49:36.162212  292945 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0205 02:49:36.162275  292945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:49:36.172875  292945 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0205 02:49:36.172954  292945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:49:36.185535  292945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:49:36.196185  292945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:49:36.206491  292945 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0205 02:49:36.216150  292945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:49:36.226438  292945 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:49:36.242961  292945 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0205 02:49:36.253227  292945 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0205 02:49:36.262523  292945 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
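The sed and grep invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place; before the crio restart below, the intended result can be spot-checked with a one-liner like this (a sketch, run inside the node, paths as in the log):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected: pause_image = "registry.k8s.io/pause:3.10", cgroup_manager = "cgroupfs",
	# conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls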
	I0205 02:49:36.271953  292945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 02:49:36.350339  292945 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0205 02:49:36.711492  292945 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0205 02:49:36.711572  292945 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0205 02:49:36.715773  292945 start.go:563] Will wait 60s for crictl version
	I0205 02:49:36.715841  292945 ssh_runner.go:195] Run: which crictl
	I0205 02:49:36.719742  292945 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0205 02:49:36.761077  292945 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0205 02:49:36.761162  292945 ssh_runner.go:195] Run: crio --version
	I0205 02:49:36.809424  292945 ssh_runner.go:195] Run: crio --version
	I0205 02:49:36.852098  292945 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.24.6 ...
	I0205 02:49:36.853442  292945 cli_runner.go:164] Run: docker network inspect flannel-315000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0205 02:49:36.873057  292945 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0205 02:49:36.877386  292945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
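The bash one-liner above rewrites /etc/hosts through a temp file so the host.minikube.internal record appears exactly once; with the docker driver the effect can be checked from outside the node (container name assumed from this profile):

	docker exec flannel-315000 grep host.minikube.internal /etc/hosts
	# 192.168.94.1	host.minikube.internal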
	I0205 02:49:36.889657  292945 kubeadm.go:883] updating cluster {Name:flannel-315000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:flannel-315000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0205 02:49:36.889833  292945 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 02:49:36.889911  292945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 02:49:36.976765  292945 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 02:49:36.976787  292945 crio.go:433] Images already preloaded, skipping extraction
	I0205 02:49:36.976832  292945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0205 02:49:37.016250  292945 crio.go:514] all images are preloaded for cri-o runtime.
	I0205 02:49:37.016282  292945 cache_images.go:84] Images are preloaded, skipping loading
	I0205 02:49:37.016292  292945 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.1 crio true true} ...
	I0205 02:49:37.016418  292945 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=flannel-315000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:flannel-315000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
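The kubelet drop-in printed above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; a quick way to read back the merged unit on the node (docker driver, so the node is a container named after the profile) would be:

	docker exec flannel-315000 systemctl cat kubelet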
	I0205 02:49:37.016506  292945 ssh_runner.go:195] Run: crio config
	I0205 02:49:37.065575  292945 cni.go:84] Creating CNI manager for "flannel"
	I0205 02:49:37.065608  292945 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0205 02:49:37.065632  292945 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-315000 NodeName:flannel-315000 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0205 02:49:37.065766  292945 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-315000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0205 02:49:37.065828  292945 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0205 02:49:37.076471  292945 binaries.go:44] Found k8s binaries, skipping transfer
	I0205 02:49:37.076538  292945 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0205 02:49:37.086025  292945 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I0205 02:49:37.105878  292945 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0205 02:49:37.126607  292945 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2288 bytes)
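The kubeadm config dumped above is what lands in /var/tmp/minikube/kubeadm.yaml.new here. As a sketch only (on a fresh node, before the real init below; the kubeadm binary path is taken from this run), the same file could be validated without creating a cluster via kubeadm's dry-run mode:

	docker exec flannel-315000 \
	  /var/lib/minikube/binaries/v1.32.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run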
	I0205 02:49:37.145981  292945 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0205 02:49:37.149604  292945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0205 02:49:37.161113  292945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 02:49:37.243174  292945 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 02:49:37.258910  292945 certs.go:68] Setting up /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000 for IP: 192.168.94.2
	I0205 02:49:37.258935  292945 certs.go:194] generating shared ca certs ...
	I0205 02:49:37.258951  292945 certs.go:226] acquiring lock for ca certs: {Name:mkf47158da08358d0aa679f4aa239783b5be6e53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:49:37.259126  292945 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20363-12617/.minikube/ca.key
	I0205 02:49:37.259178  292945 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20363-12617/.minikube/proxy-client-ca.key
	I0205 02:49:37.259193  292945 certs.go:256] generating profile certs ...
	I0205 02:49:37.259261  292945 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/client.key
	I0205 02:49:37.259292  292945 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/client.crt with IP's: []
	I0205 02:49:37.585725  292945 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/client.crt ...
	I0205 02:49:37.585758  292945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/client.crt: {Name:mk74d86b38adfa5668a4deb8185dd2892c393ef3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:49:37.585986  292945 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/client.key ...
	I0205 02:49:37.586007  292945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/client.key: {Name:mkf24044ad7dde9b46a25df851e5be210f94c56a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:49:37.586140  292945 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/apiserver.key.a76f22b8
	I0205 02:49:37.586170  292945 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/apiserver.crt.a76f22b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0205 02:49:37.769669  292945 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/apiserver.crt.a76f22b8 ...
	I0205 02:49:37.769695  292945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/apiserver.crt.a76f22b8: {Name:mk07c6d21a5a75094688392fd90c0bcecbf06df9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:49:37.769917  292945 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/apiserver.key.a76f22b8 ...
	I0205 02:49:37.769946  292945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/apiserver.key.a76f22b8: {Name:mk41b24da961243219204e80a68184fde2e2bd87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:49:37.770081  292945 certs.go:381] copying /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/apiserver.crt.a76f22b8 -> /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/apiserver.crt
	I0205 02:49:37.770206  292945 certs.go:385] copying /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/apiserver.key.a76f22b8 -> /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/apiserver.key
	I0205 02:49:37.770278  292945 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/proxy-client.key
	I0205 02:49:37.770293  292945 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/proxy-client.crt with IP's: []
	I0205 02:49:37.910595  292945 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/proxy-client.crt ...
	I0205 02:49:37.910639  292945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/proxy-client.crt: {Name:mkc20fe9cb977af6425a3bf0c5e828faa04448bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:49:37.910845  292945 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/proxy-client.key ...
	I0205 02:49:37.910860  292945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/proxy-client.key: {Name:mk1be6bd3dad423161aa69239a7cab3f7b8f7a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:49:37.911041  292945 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12617/.minikube/certs/19390.pem (1338 bytes)
	W0205 02:49:37.911081  292945 certs.go:480] ignoring /home/jenkins/minikube-integration/20363-12617/.minikube/certs/19390_empty.pem, impossibly tiny 0 bytes
	I0205 02:49:37.911093  292945 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca-key.pem (1675 bytes)
	I0205 02:49:37.911118  292945 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12617/.minikube/certs/ca.pem (1078 bytes)
	I0205 02:49:37.911141  292945 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12617/.minikube/certs/cert.pem (1123 bytes)
	I0205 02:49:37.911162  292945 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12617/.minikube/certs/key.pem (1679 bytes)
	I0205 02:49:37.911198  292945 certs.go:484] found cert: /home/jenkins/minikube-integration/20363-12617/.minikube/files/etc/ssl/certs/193902.pem (1708 bytes)
	I0205 02:49:37.911884  292945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0205 02:49:37.937595  292945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0205 02:49:37.964774  292945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0205 02:49:37.992458  292945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0205 02:49:38.019906  292945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0205 02:49:38.048041  292945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0205 02:49:38.073007  292945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0205 02:49:38.097663  292945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/flannel-315000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0205 02:49:38.123499  292945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/files/etc/ssl/certs/193902.pem --> /usr/share/ca-certificates/193902.pem (1708 bytes)
	I0205 02:49:38.148484  292945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0205 02:49:38.173210  292945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20363-12617/.minikube/certs/19390.pem --> /usr/share/ca-certificates/19390.pem (1338 bytes)
	I0205 02:49:38.197932  292945 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0205 02:49:38.216247  292945 ssh_runner.go:195] Run: openssl version
	I0205 02:49:38.221795  292945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/193902.pem && ln -fs /usr/share/ca-certificates/193902.pem /etc/ssl/certs/193902.pem"
	I0205 02:49:38.231795  292945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/193902.pem
	I0205 02:49:38.235668  292945 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb  5 02:09 /usr/share/ca-certificates/193902.pem
	I0205 02:49:38.235727  292945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/193902.pem
	I0205 02:49:38.242645  292945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/193902.pem /etc/ssl/certs/3ec20f2e.0"
	I0205 02:49:38.252694  292945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0205 02:49:38.262628  292945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0205 02:49:38.266343  292945 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb  5 02:04 /usr/share/ca-certificates/minikubeCA.pem
	I0205 02:49:38.266402  292945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0205 02:49:38.273247  292945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0205 02:49:38.283435  292945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19390.pem && ln -fs /usr/share/ca-certificates/19390.pem /etc/ssl/certs/19390.pem"
	I0205 02:49:38.293401  292945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19390.pem
	I0205 02:49:38.297081  292945 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb  5 02:09 /usr/share/ca-certificates/19390.pem
	I0205 02:49:38.297156  292945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19390.pem
	I0205 02:49:38.304044  292945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19390.pem /etc/ssl/certs/51391683.0"
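The symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) come from the openssl x509 -hash calls in this log: each certificate is linked into /etc/ssl/certs under its subject-hash plus a .0 suffix. For example, on the node:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints the hash used for the /etc/ssl/certs/b5213941.0 symlink created above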
	I0205 02:49:38.313932  292945 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0205 02:49:38.317218  292945 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0205 02:49:38.317283  292945 kubeadm.go:392] StartCluster: {Name:flannel-315000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:flannel-315000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:49:38.317375  292945 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0205 02:49:38.317415  292945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0205 02:49:38.353523  292945 cri.go:89] found id: ""
	I0205 02:49:38.353618  292945 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0205 02:49:38.362492  292945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0205 02:49:38.371516  292945 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0205 02:49:38.371578  292945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0205 02:49:38.380690  292945 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0205 02:49:38.380710  292945 kubeadm.go:157] found existing configuration files:
	
	I0205 02:49:38.380750  292945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0205 02:49:38.390171  292945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0205 02:49:38.390246  292945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0205 02:49:38.399168  292945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0205 02:49:38.408225  292945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0205 02:49:38.408294  292945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0205 02:49:38.417647  292945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0205 02:49:38.427162  292945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0205 02:49:38.427214  292945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0205 02:49:38.436203  292945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0205 02:49:38.445292  292945 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0205 02:49:38.445347  292945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0205 02:49:38.454510  292945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0205 02:49:38.514726  292945 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0205 02:49:38.515099  292945 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-gcp\n", err: exit status 1
	I0205 02:49:38.576135  292945 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0205 02:49:47.406733  292945 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0205 02:49:47.406839  292945 kubeadm.go:310] [preflight] Running pre-flight checks
	I0205 02:49:47.406983  292945 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0205 02:49:47.407063  292945 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-gcp
	I0205 02:49:47.407125  292945 kubeadm.go:310] OS: Linux
	I0205 02:49:47.407195  292945 kubeadm.go:310] CGROUPS_CPU: enabled
	I0205 02:49:47.407269  292945 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0205 02:49:47.407343  292945 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0205 02:49:47.407407  292945 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0205 02:49:47.407478  292945 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0205 02:49:47.407543  292945 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0205 02:49:47.407611  292945 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0205 02:49:47.407676  292945 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0205 02:49:47.407748  292945 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0205 02:49:47.407865  292945 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0205 02:49:47.407991  292945 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0205 02:49:47.408137  292945 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0205 02:49:47.408233  292945 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0205 02:49:47.410097  292945 out.go:235]   - Generating certificates and keys ...
	I0205 02:49:47.410200  292945 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0205 02:49:47.410295  292945 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0205 02:49:47.410393  292945 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0205 02:49:47.410484  292945 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0205 02:49:47.410566  292945 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0205 02:49:47.410609  292945 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0205 02:49:47.410651  292945 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0205 02:49:47.410823  292945 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-315000 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0205 02:49:47.410900  292945 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0205 02:49:47.411064  292945 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-315000 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0205 02:49:47.411165  292945 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0205 02:49:47.411240  292945 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0205 02:49:47.411307  292945 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0205 02:49:47.411362  292945 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0205 02:49:47.411412  292945 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0205 02:49:47.411499  292945 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0205 02:49:47.411601  292945 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0205 02:49:47.411663  292945 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0205 02:49:47.411735  292945 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0205 02:49:47.411874  292945 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0205 02:49:47.411969  292945 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0205 02:49:47.414665  292945 out.go:235]   - Booting up control plane ...
	I0205 02:49:47.414779  292945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0205 02:49:47.414884  292945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0205 02:49:47.414980  292945 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0205 02:49:47.415111  292945 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0205 02:49:47.415245  292945 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0205 02:49:47.415291  292945 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0205 02:49:47.415449  292945 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0205 02:49:47.415583  292945 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0205 02:49:47.415669  292945 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.878542ms
	I0205 02:49:47.415764  292945 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0205 02:49:47.415859  292945 kubeadm.go:310] [api-check] The API server is healthy after 4.502189797s
	I0205 02:49:47.416017  292945 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0205 02:49:47.416213  292945 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0205 02:49:47.416285  292945 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0205 02:49:47.416487  292945 kubeadm.go:310] [mark-control-plane] Marking the node flannel-315000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0205 02:49:47.416543  292945 kubeadm.go:310] [bootstrap-token] Using token: 6akg4u.0b2y23ay1fjn3hvj
	I0205 02:49:47.418247  292945 out.go:235]   - Configuring RBAC rules ...
	I0205 02:49:47.418398  292945 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0205 02:49:47.418524  292945 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0205 02:49:47.418689  292945 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0205 02:49:47.418856  292945 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0205 02:49:47.419001  292945 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0205 02:49:47.419078  292945 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0205 02:49:47.419184  292945 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0205 02:49:47.419229  292945 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0205 02:49:47.419271  292945 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0205 02:49:47.419277  292945 kubeadm.go:310] 
	I0205 02:49:47.419326  292945 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0205 02:49:47.419335  292945 kubeadm.go:310] 
	I0205 02:49:47.419404  292945 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0205 02:49:47.419410  292945 kubeadm.go:310] 
	I0205 02:49:47.419437  292945 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0205 02:49:47.419485  292945 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0205 02:49:47.419527  292945 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0205 02:49:47.419536  292945 kubeadm.go:310] 
	I0205 02:49:47.419586  292945 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0205 02:49:47.419592  292945 kubeadm.go:310] 
	I0205 02:49:47.419634  292945 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0205 02:49:47.419640  292945 kubeadm.go:310] 
	I0205 02:49:47.419688  292945 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0205 02:49:47.419770  292945 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0205 02:49:47.419836  292945 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0205 02:49:47.419843  292945 kubeadm.go:310] 
	I0205 02:49:47.419910  292945 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0205 02:49:47.419971  292945 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0205 02:49:47.419977  292945 kubeadm.go:310] 
	I0205 02:49:47.420047  292945 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6akg4u.0b2y23ay1fjn3hvj \
	I0205 02:49:47.420159  292945 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4f5b0b470d86181f8e656721a5e49e4a405b9f662421ec1e549cfda981306944 \
	I0205 02:49:47.420205  292945 kubeadm.go:310] 	--control-plane 
	I0205 02:49:47.420217  292945 kubeadm.go:310] 
	I0205 02:49:47.420308  292945 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0205 02:49:47.420317  292945 kubeadm.go:310] 
	I0205 02:49:47.420388  292945 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6akg4u.0b2y23ay1fjn3hvj \
	I0205 02:49:47.420485  292945 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4f5b0b470d86181f8e656721a5e49e4a405b9f662421ec1e549cfda981306944 
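The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's public key. A sketch of recomputing it on the control-plane node (standard kubeadm procedure; the certificate dir is taken from this run, and an RSA CA key is assumed):

	docker exec flannel-315000 sh -c \
	  "openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	   | openssl rsa -pubin -outform der 2>/dev/null \
	   | openssl dgst -sha256 -hex | sed 's/^.* //'"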
	I0205 02:49:47.420497  292945 cni.go:84] Creating CNI manager for "flannel"
	I0205 02:49:47.422264  292945 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0205 02:49:47.423559  292945 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0205 02:49:47.427712  292945 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0205 02:49:47.427737  292945 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0205 02:49:47.447489  292945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0205 02:49:47.770770  292945 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0205 02:49:47.770853  292945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:49:47.770904  292945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-315000 minikube.k8s.io/updated_at=2025_02_05T02_49_47_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=e14c106b6cd223b80b8f7425af26284a33f75c3d minikube.k8s.io/name=flannel-315000 minikube.k8s.io/primary=true
	I0205 02:49:47.778749  292945 ops.go:34] apiserver oom_adj: -16
	I0205 02:49:47.950346  292945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:49:48.450835  292945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:49:48.950822  292945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:49:49.450829  292945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:49:49.950428  292945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:49:50.450396  292945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:49:50.950821  292945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:49:51.450537  292945 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0205 02:49:51.524176  292945 kubeadm.go:1113] duration metric: took 3.753384771s to wait for elevateKubeSystemPrivileges
	I0205 02:49:51.524207  292945 kubeadm.go:394] duration metric: took 13.206930657s to StartCluster
	I0205 02:49:51.524229  292945 settings.go:142] acquiring lock: {Name:mk9276b273f579f5d6fc4784e85dc48e5e91aadf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:49:51.524311  292945 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20363-12617/kubeconfig
	I0205 02:49:51.526233  292945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/kubeconfig: {Name:mk409188e78b16bca4bb55c54818efe1c75fa3a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:49:51.526545  292945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0205 02:49:51.526561  292945 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0205 02:49:51.526643  292945 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0205 02:49:51.526772  292945 addons.go:69] Setting storage-provisioner=true in profile "flannel-315000"
	I0205 02:49:51.526783  292945 config.go:182] Loaded profile config "flannel-315000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:49:51.526800  292945 addons.go:238] Setting addon storage-provisioner=true in "flannel-315000"
	I0205 02:49:51.526797  292945 addons.go:69] Setting default-storageclass=true in profile "flannel-315000"
	I0205 02:49:51.526828  292945 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-315000"
	I0205 02:49:51.526832  292945 host.go:66] Checking if "flannel-315000" exists ...
	I0205 02:49:51.527205  292945 cli_runner.go:164] Run: docker container inspect flannel-315000 --format={{.State.Status}}
	I0205 02:49:51.527451  292945 cli_runner.go:164] Run: docker container inspect flannel-315000 --format={{.State.Status}}
	I0205 02:49:51.528587  292945 out.go:177] * Verifying Kubernetes components...
	I0205 02:49:51.529876  292945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0205 02:49:51.553362  292945 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0205 02:49:51.554895  292945 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 02:49:51.554920  292945 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0205 02:49:51.554987  292945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-315000
	I0205 02:49:51.555982  292945 addons.go:238] Setting addon default-storageclass=true in "flannel-315000"
	I0205 02:49:51.556037  292945 host.go:66] Checking if "flannel-315000" exists ...
	I0205 02:49:51.556532  292945 cli_runner.go:164] Run: docker container inspect flannel-315000 --format={{.State.Status}}
	I0205 02:49:51.583566  292945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/flannel-315000/id_rsa Username:docker}
	I0205 02:49:51.595325  292945 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0205 02:49:51.595354  292945 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0205 02:49:51.595434  292945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-315000
	I0205 02:49:51.620398  292945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/flannel-315000/id_rsa Username:docker}
	I0205 02:49:51.750750  292945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0205 02:49:51.839497  292945 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0205 02:49:51.855930  292945 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0205 02:49:51.950063  292945 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0205 02:49:52.350888  292945 start.go:971] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
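The sed pipeline above injects a hosts block mapping host.minikube.internal to 192.168.94.1 into the CoreDNS Corefile; assuming the kubeconfig context created for this profile, the rewritten ConfigMap can be read back with:

	kubectl --context flannel-315000 -n kube-system get configmap coredns -o yaml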
	I0205 02:49:52.356147  292945 node_ready.go:35] waiting up to 15m0s for node "flannel-315000" to be "Ready" ...
	I0205 02:49:52.657210  292945 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0205 02:49:52.658493  292945 addons.go:514] duration metric: took 1.131855199s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0205 02:49:52.856421  292945 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-315000" context rescaled to 1 replicas
	I0205 02:49:54.359511  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:49:56.859135  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:49:58.859526  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:01.358849  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:03.359663  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:05.361356  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:07.860174  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:10.359325  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:12.859755  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:15.359021  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:17.359751  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:19.360072  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:21.859685  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:24.360186  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:26.859593  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:28.859669  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:31.360344  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:33.860244  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:36.359334  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:38.359765  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:40.364592  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:42.859242  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:44.859532  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:46.859897  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:49.360178  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:51.859543  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:53.860413  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:56.359496  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:50:58.861150  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:01.359149  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:03.359288  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:05.860003  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:08.359821  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:10.859844  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:12.860168  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:15.359176  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:17.359336  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:19.360222  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:21.863332  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:24.359357  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:26.360100  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:28.859478  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:30.860021  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:32.860583  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:35.359553  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:37.360013  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:39.360534  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:41.859470  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:43.860286  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:46.359304  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:48.359496  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:50.359667  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:52.859664  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:54.860316  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:57.359687  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:51:59.359788  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:01.360293  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:03.360346  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:05.859497  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:08.359642  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:10.359828  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:12.859370  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:14.859863  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:17.359114  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:19.359785  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:21.859291  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:23.859434  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:26.360107  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:28.859119  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:30.859906  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:33.359626  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:35.860104  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:38.359640  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:40.859776  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:43.359917  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:45.859832  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:48.359386  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:50.359921  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:52.859493  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:54.859924  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:57.359692  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:52:59.859411  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:01.859920  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:04.359620  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:06.859487  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:09.359340  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:11.859582  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:14.359265  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:16.359293  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:18.359868  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:20.859587  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:22.859751  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:24.859941  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:26.860318  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:29.359827  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:31.859098  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:33.859916  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:36.359815  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:38.359932  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:40.859904  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:43.359067  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:45.359572  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:47.859680  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:50.360209  292945 node_ready.go:53] node "flannel-315000" has status "Ready":"False"
	I0205 02:53:52.360113  292945 node_ready.go:38] duration metric: took 4m0.00392788s for node "flannel-315000" to be "Ready" ...
	I0205 02:53:52.362165  292945 out.go:201] 
	W0205 02:53:52.363581  292945 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0205 02:53:52.363604  292945 out.go:270] * 
	* 
	W0205 02:53:52.364408  292945 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0205 02:53:52.366440  292945 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (266.00s)
E0205 02:56:39.010798   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/enable-default-cni-315000/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    

Test pass (291/324)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 5.35
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 6.68
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.2
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.07
21 TestBinaryMirror 0.75
22 TestOffline 57.26
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 118.13
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 7.44
35 TestAddons/parallel/Registry 14.25
37 TestAddons/parallel/InspektorGadget 11.61
38 TestAddons/parallel/MetricsServer 6.02
40 TestAddons/parallel/CSI 56.04
41 TestAddons/parallel/Headlamp 16.74
42 TestAddons/parallel/CloudSpanner 5.52
43 TestAddons/parallel/LocalPath 53.12
44 TestAddons/parallel/NvidiaDevicePlugin 5.5
45 TestAddons/parallel/Yakd 10.62
46 TestAddons/parallel/AmdGpuDevicePlugin 6.45
47 TestAddons/StoppedEnableDisable 12.06
48 TestCertOptions 30.04
49 TestCertExpiration 233.64
51 TestForceSystemdFlag 35.34
52 TestForceSystemdEnv 37.46
54 TestKVMDriverInstallOrUpdate 4.38
58 TestErrorSpam/setup 22.92
59 TestErrorSpam/start 0.59
60 TestErrorSpam/status 0.86
61 TestErrorSpam/pause 1.55
62 TestErrorSpam/unpause 1.57
63 TestErrorSpam/stop 1.36
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 40.08
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 22.03
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.32
75 TestFunctional/serial/CacheCmd/cache/add_local 1.31
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 31.37
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.32
86 TestFunctional/serial/LogsFileCmd 1.34
87 TestFunctional/serial/InvalidService 4.43
89 TestFunctional/parallel/ConfigCmd 0.37
90 TestFunctional/parallel/DashboardCmd 143.33
91 TestFunctional/parallel/DryRun 0.33
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.97
97 TestFunctional/parallel/ServiceCmdConnect 65.47
98 TestFunctional/parallel/AddonsCmd 0.13
101 TestFunctional/parallel/SSHCmd 0.59
102 TestFunctional/parallel/CpCmd 1.75
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.69
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
113 TestFunctional/parallel/License 0.2
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.44
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
120 TestFunctional/parallel/ImageCommands/ImageBuild 1.93
121 TestFunctional/parallel/ImageCommands/Setup 1
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.37
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.23
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.75
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
137 TestFunctional/parallel/ServiceCmd/DeployApp 8.15
138 TestFunctional/parallel/ServiceCmd/List 0.88
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.88
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
141 TestFunctional/parallel/ServiceCmd/Format 0.51
142 TestFunctional/parallel/ServiceCmd/URL 0.51
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
144 TestFunctional/parallel/ProfileCmd/profile_list 0.37
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
146 TestFunctional/parallel/MountCmd/any-port 90.64
147 TestFunctional/parallel/MountCmd/specific-port 1.76
148 TestFunctional/parallel/MountCmd/VerifyCleanup 1.43
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 100.74
162 TestMultiControlPlane/serial/DeployApp 4
163 TestMultiControlPlane/serial/PingHostFromPods 1.01
164 TestMultiControlPlane/serial/AddWorkerNode 33.02
165 TestMultiControlPlane/serial/NodeLabels 0.06
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.82
167 TestMultiControlPlane/serial/CopyFile 15.61
168 TestMultiControlPlane/serial/StopSecondaryNode 12.46
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
170 TestMultiControlPlane/serial/RestartSecondaryNode 20.23
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.83
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 177.51
173 TestMultiControlPlane/serial/DeleteSecondaryNode 11.69
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
175 TestMultiControlPlane/serial/StopCluster 35.44
176 TestMultiControlPlane/serial/RestartCluster 87.43
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
178 TestMultiControlPlane/serial/AddSecondaryNode 40.12
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
183 TestJSONOutput/start/Command 43.8
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.67
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.58
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.69
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
208 TestKicCustomNetwork/create_custom_network 27.65
209 TestKicCustomNetwork/use_default_bridge_network 22.6
210 TestKicExistingNetwork 22.7
211 TestKicCustomSubnet 23.87
212 TestKicStaticIP 23.4
213 TestMainNoArgs 0.05
214 TestMinikubeProfile 46.75
217 TestMountStart/serial/StartWithMountFirst 5.37
218 TestMountStart/serial/VerifyMountFirst 0.25
219 TestMountStart/serial/StartWithMountSecond 5.34
220 TestMountStart/serial/VerifyMountSecond 0.24
221 TestMountStart/serial/DeleteFirst 1.58
222 TestMountStart/serial/VerifyMountPostDelete 0.24
223 TestMountStart/serial/Stop 1.17
224 TestMountStart/serial/RestartStopped 7.13
225 TestMountStart/serial/VerifyMountPostStop 0.25
228 TestMultiNode/serial/FreshStart2Nodes 67.28
229 TestMultiNode/serial/DeployApp2Nodes 3.39
230 TestMultiNode/serial/PingHostFrom2Pods 0.71
231 TestMultiNode/serial/AddNode 31.11
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.6
234 TestMultiNode/serial/CopyFile 8.91
235 TestMultiNode/serial/StopNode 2.08
236 TestMultiNode/serial/StartAfterStop 8.97
237 TestMultiNode/serial/RestartKeepsNodes 85.62
238 TestMultiNode/serial/DeleteNode 4.94
239 TestMultiNode/serial/StopMultiNode 23.72
240 TestMultiNode/serial/RestartMultiNode 49.61
241 TestMultiNode/serial/ValidateNameConflict 23
246 TestPreload 103.47
248 TestScheduledStopUnix 100.38
251 TestInsufficientStorage 10.11
252 TestRunningBinaryUpgrade 53.91
254 TestKubernetesUpgrade 354.21
255 TestMissingContainerUpgrade 115.55
257 TestPause/serial/Start 54.61
258 TestPause/serial/SecondStartNoReconfiguration 20.46
259 TestStoppedBinaryUpgrade/Setup 0.5
260 TestStoppedBinaryUpgrade/Upgrade 93.18
261 TestPause/serial/Pause 0.69
262 TestPause/serial/VerifyStatus 0.31
263 TestPause/serial/Unpause 0.65
264 TestPause/serial/PauseAgain 0.86
265 TestPause/serial/DeletePaused 3.44
266 TestPause/serial/VerifyDeletedResources 0.72
267 TestStoppedBinaryUpgrade/MinikubeLogs 0.98
275 TestNetworkPlugins/group/false 4.83
280 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
281 TestNoKubernetes/serial/StartWithK8s 31.78
282 TestNoKubernetes/serial/StartWithStopK8s 5.95
283 TestNoKubernetes/serial/Start 4.88
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
285 TestNoKubernetes/serial/ProfileList 25.35
293 TestNetworkPlugins/group/auto/Start 45.93
294 TestNoKubernetes/serial/Stop 1.42
295 TestNoKubernetes/serial/StartNoArgs 7.15
296 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
297 TestNetworkPlugins/group/kindnet/Start 42.94
298 TestNetworkPlugins/group/calico/Start 59.88
299 TestNetworkPlugins/group/auto/KubeletFlags 0.32
300 TestNetworkPlugins/group/auto/NetCatPod 9.26
301 TestNetworkPlugins/group/auto/DNS 0.16
302 TestNetworkPlugins/group/auto/Localhost 0.15
303 TestNetworkPlugins/group/auto/HairPin 0.15
304 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
305 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
306 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
307 TestNetworkPlugins/group/custom-flannel/Start 49.47
308 TestNetworkPlugins/group/kindnet/DNS 0.14
309 TestNetworkPlugins/group/kindnet/Localhost 0.11
310 TestNetworkPlugins/group/kindnet/HairPin 0.11
311 TestNetworkPlugins/group/calico/ControllerPod 6.01
312 TestNetworkPlugins/group/calico/KubeletFlags 0.32
313 TestNetworkPlugins/group/calico/NetCatPod 10.26
314 TestNetworkPlugins/group/calico/DNS 0.15
315 TestNetworkPlugins/group/calico/Localhost 0.13
316 TestNetworkPlugins/group/calico/HairPin 0.13
317 TestNetworkPlugins/group/enable-default-cni/Start 68.9
319 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
320 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.23
321 TestNetworkPlugins/group/custom-flannel/DNS 0.15
322 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
323 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
324 TestNetworkPlugins/group/bridge/Start 33.51
325 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
326 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.2
327 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
328 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
329 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
330 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
331 TestNetworkPlugins/group/bridge/NetCatPod 12.21
333 TestStartStop/group/old-k8s-version/serial/FirstStart 127.54
334 TestNetworkPlugins/group/bridge/DNS 20.95
336 TestStartStop/group/no-preload/serial/FirstStart 59.45
337 TestNetworkPlugins/group/bridge/Localhost 0.13
338 TestNetworkPlugins/group/bridge/HairPin 0.11
340 TestStartStop/group/embed-certs/serial/FirstStart 45.25
341 TestStartStop/group/no-preload/serial/DeployApp 8.27
342 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.89
343 TestStartStop/group/no-preload/serial/Stop 11.93
344 TestStartStop/group/embed-certs/serial/DeployApp 8.25
345 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
346 TestStartStop/group/no-preload/serial/SecondStart 273.81
347 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
348 TestStartStop/group/embed-certs/serial/Stop 12.4
349 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
350 TestStartStop/group/embed-certs/serial/SecondStart 262.35
351 TestStartStop/group/old-k8s-version/serial/DeployApp 9.39
352 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.86
353 TestStartStop/group/old-k8s-version/serial/Stop 11.94
354 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
355 TestStartStop/group/old-k8s-version/serial/SecondStart 132.08
357 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.16
358 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
360 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.88
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
362 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.42
363 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
364 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
365 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
366 TestStartStop/group/old-k8s-version/serial/Pause 2.75
368 TestStartStop/group/newest-cni/serial/FirstStart 26.86
369 TestStartStop/group/newest-cni/serial/DeployApp 0
370 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.25
371 TestStartStop/group/newest-cni/serial/Stop 1.23
372 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
373 TestStartStop/group/newest-cni/serial/SecondStart 13.22
374 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
375 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
376 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
377 TestStartStop/group/newest-cni/serial/Pause 3.17
378 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
379 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
380 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
381 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
382 TestStartStop/group/no-preload/serial/Pause 2.79
383 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
384 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
385 TestStartStop/group/embed-certs/serial/Pause 2.74
386 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
387 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
388 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
389 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.7
TestDownloadOnly/v1.20.0/json-events (5.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-937448 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-937448 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.351943149s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.35s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0205 02:03:28.492253   19390 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0205 02:03:28.492359   19390 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-937448
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-937448: exit status 85 (61.831942ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-937448 | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC |          |
	|         | -p download-only-937448        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/05 02:03:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0205 02:03:23.180092   19402 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:03:23.180199   19402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:03:23.180211   19402 out.go:358] Setting ErrFile to fd 2...
	I0205 02:03:23.180218   19402 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:03:23.180395   19402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
	W0205 02:03:23.180506   19402 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20363-12617/.minikube/config/config.json: open /home/jenkins/minikube-integration/20363-12617/.minikube/config/config.json: no such file or directory
	I0205 02:03:23.181085   19402 out.go:352] Setting JSON to true
	I0205 02:03:23.182073   19402 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2749,"bootTime":1738718254,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 02:03:23.182165   19402 start.go:139] virtualization: kvm guest
	I0205 02:03:23.184681   19402 out.go:97] [download-only-937448] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0205 02:03:23.184816   19402 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20363-12617/.minikube/cache/preloaded-tarball: no such file or directory
	I0205 02:03:23.184857   19402 notify.go:220] Checking for updates...
	I0205 02:03:23.186286   19402 out.go:169] MINIKUBE_LOCATION=20363
	I0205 02:03:23.187604   19402 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 02:03:23.188928   19402 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig
	I0205 02:03:23.190168   19402 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube
	I0205 02:03:23.191363   19402 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0205 02:03:23.193784   19402 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0205 02:03:23.193998   19402 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 02:03:23.214919   19402 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0205 02:03:23.214984   19402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0205 02:03:23.568871   19402 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:55 SystemTime:2025-02-05 02:03:23.560358422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0205 02:03:23.568973   19402 docker.go:318] overlay module found
	I0205 02:03:23.570700   19402 out.go:97] Using the docker driver based on user configuration
	I0205 02:03:23.570719   19402 start.go:297] selected driver: docker
	I0205 02:03:23.570724   19402 start.go:901] validating driver "docker" against <nil>
	I0205 02:03:23.570798   19402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0205 02:03:23.615062   19402 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:55 SystemTime:2025-02-05 02:03:23.607173969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0205 02:03:23.615219   19402 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0205 02:03:23.615714   19402 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0205 02:03:23.615881   19402 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0205 02:03:23.617822   19402 out.go:169] Using Docker driver with root privileges
	I0205 02:03:23.619093   19402 cni.go:84] Creating CNI manager for ""
	I0205 02:03:23.619147   19402 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0205 02:03:23.619158   19402 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0205 02:03:23.619218   19402 start.go:340] cluster config:
	{Name:download-only-937448 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-937448 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:03:23.620599   19402 out.go:97] Starting "download-only-937448" primary control-plane node in "download-only-937448" cluster
	I0205 02:03:23.620613   19402 cache.go:121] Beginning downloading kic base image for docker with crio
	I0205 02:03:23.621965   19402 out.go:97] Pulling base image v0.0.46 ...
	I0205 02:03:23.621987   19402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0205 02:03:23.622083   19402 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0205 02:03:23.637433   19402 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0205 02:03:23.637616   19402 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0205 02:03:23.637713   19402 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0205 02:03:23.661245   19402 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0205 02:03:23.661276   19402 cache.go:56] Caching tarball of preloaded images
	I0205 02:03:23.661442   19402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0205 02:03:23.663583   19402 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0205 02:03:23.663625   19402 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0205 02:03:23.694040   19402 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20363-12617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0205 02:03:26.656371   19402 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0205 02:03:26.656447   19402 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20363-12617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0205 02:03:27.393597   19402 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0205 02:03:27.624208   19402 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0205 02:03:27.624540   19402 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/download-only-937448/config.json ...
	I0205 02:03:27.624570   19402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/download-only-937448/config.json: {Name:mke1a6e79751b2b4b5c2abc6ee0a69a93ea4ac4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:03:27.624716   19402 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0205 02:03:27.624877   19402 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20363-12617/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-937448 host does not exist
	  To start a cluster, run: "minikube start -p download-only-937448"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-937448
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.1/json-events (6.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-931461 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-931461 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.680219925s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (6.68s)

                                                
                                    
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0205 02:03:35.578633   19390 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0205 02:03:35.578686   19390 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20363-12617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-931461
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-931461: exit status 85 (63.463769ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-937448 | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC |                     |
	|         | -p download-only-937448        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC | 05 Feb 25 02:03 UTC |
	| delete  | -p download-only-937448        | download-only-937448 | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC | 05 Feb 25 02:03 UTC |
	| start   | -o=json --download-only        | download-only-931461 | jenkins | v1.35.0 | 05 Feb 25 02:03 UTC |                     |
	|         | -p download-only-931461        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/05 02:03:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0205 02:03:28.938903   19752 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:03:28.938997   19752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:03:28.939005   19752 out.go:358] Setting ErrFile to fd 2...
	I0205 02:03:28.939009   19752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:03:28.939165   19752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
	I0205 02:03:28.939710   19752 out.go:352] Setting JSON to true
	I0205 02:03:28.940515   19752 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2755,"bootTime":1738718254,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 02:03:28.940604   19752 start.go:139] virtualization: kvm guest
	I0205 02:03:28.943001   19752 out.go:97] [download-only-931461] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 02:03:28.943136   19752 notify.go:220] Checking for updates...
	I0205 02:03:28.944576   19752 out.go:169] MINIKUBE_LOCATION=20363
	I0205 02:03:28.946317   19752 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 02:03:28.947613   19752 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig
	I0205 02:03:28.948833   19752 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube
	I0205 02:03:28.950158   19752 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0205 02:03:28.952734   19752 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0205 02:03:28.952947   19752 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 02:03:28.975445   19752 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0205 02:03:28.975511   19752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0205 02:03:29.024405   19752 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-02-05 02:03:29.014363816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0205 02:03:29.024514   19752 docker.go:318] overlay module found
	I0205 02:03:29.026295   19752 out.go:97] Using the docker driver based on user configuration
	I0205 02:03:29.026316   19752 start.go:297] selected driver: docker
	I0205 02:03:29.026324   19752 start.go:901] validating driver "docker" against <nil>
	I0205 02:03:29.026401   19752 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0205 02:03:29.072693   19752 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-02-05 02:03:29.064219162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0205 02:03:29.072854   19752 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0205 02:03:29.073324   19752 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0205 02:03:29.073452   19752 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0205 02:03:29.075279   19752 out.go:169] Using Docker driver with root privileges
	I0205 02:03:29.076525   19752 cni.go:84] Creating CNI manager for ""
	I0205 02:03:29.076581   19752 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0205 02:03:29.076597   19752 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0205 02:03:29.076659   19752 start.go:340] cluster config:
	{Name:download-only-931461 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-931461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:03:29.078145   19752 out.go:97] Starting "download-only-931461" primary control-plane node in "download-only-931461" cluster
	I0205 02:03:29.078161   19752 cache.go:121] Beginning downloading kic base image for docker with crio
	I0205 02:03:29.079483   19752 out.go:97] Pulling base image v0.0.46 ...
	I0205 02:03:29.079508   19752 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 02:03:29.079621   19752 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0205 02:03:29.095615   19752 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0205 02:03:29.095731   19752 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0205 02:03:29.095749   19752 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0205 02:03:29.095753   19752 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0205 02:03:29.095760   19752 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0205 02:03:29.106878   19752 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0205 02:03:29.106909   19752 cache.go:56] Caching tarball of preloaded images
	I0205 02:03:29.107065   19752 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 02:03:29.109106   19752 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0205 02:03:29.109134   19752 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0205 02:03:29.146888   19752 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2af56a340efcc3949401b47b9a5d537 -> /home/jenkins/minikube-integration/20363-12617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0205 02:03:33.028535   19752 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0205 02:03:33.028636   19752 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20363-12617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0205 02:03:33.786695   19752 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0205 02:03:33.787030   19752 profile.go:143] Saving config to /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/download-only-931461/config.json ...
	I0205 02:03:33.787059   19752 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/download-only-931461/config.json: {Name:mk88413c40d42742dde80600a6f1e138dbc2a410 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0205 02:03:33.787215   19752 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0205 02:03:33.787340   19752 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20363-12617/.minikube/cache/linux/amd64/v1.32.1/kubectl
	
	
	* The control-plane node download-only-931461 host does not exist
	  To start a cluster, run: "minikube start -p download-only-931461"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)
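Note: the preload steps logged above fetch the images tarball with an md5 checksum appended to the URL and then verify the file on disk before caching it. The following is a minimal, illustrative Go sketch of that download-and-verify pattern only; the destination path is an assumption and this is not minikube's actual download/preload implementation. The URL and checksum are the ones shown in this run's log.

// Illustrative sketch: download a preload tarball and verify its md5 checksum,
// similar in spirit to the preload/download steps logged above.
// /tmp/preload.tar.lz4 is an assumed destination path for illustration.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func download(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

func md5sum(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4"
	want := "b2af56a340efcc3949401b47b9a5d537" // checksum taken from the log above
	dst := "/tmp/preload.tar.lz4"

	if err := download(url, dst); err != nil {
		log.Fatal(err)
	}
	got, err := md5sum(dst)
	if err != nil {
		log.Fatal(err)
	}
	if got != want {
		log.Fatalf("checksum mismatch: got %s, want %s", got, want)
	}
	fmt.Println("preload verified")
}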

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-931461
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.07s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-777609 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-777609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-777609
--- PASS: TestDownloadOnlyKic (1.07s)

                                                
                                    
x
+
TestBinaryMirror (0.75s)

                                                
                                                
=== RUN   TestBinaryMirror
I0205 02:03:37.285762   19390 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-310251 --alsologtostderr --binary-mirror http://127.0.0.1:37147 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-310251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-310251
--- PASS: TestBinaryMirror (0.75s)
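Note: TestBinaryMirror points minikube at a local HTTP endpoint (--binary-mirror http://127.0.0.1:37147) instead of dl.k8s.io. A minimal sketch of such a mirror is shown below, assuming the kubectl/kubelet/kubeadm binaries have already been laid out under a local ./mirror directory in the upstream release path layout; the directory name and layout are assumptions for illustration, not part of this test's fixtures.

// Minimal sketch of a local binary mirror for minikube's --binary-mirror flag.
// Assumes ./mirror mimics the upstream layout, e.g.
// ./mirror/release/v1.32.1/bin/linux/amd64/kubectl (an assumption for illustration).
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve the local directory tree over plain HTTP on the loopback address.
	fs := http.FileServer(http.Dir("./mirror"))
	addr := "127.0.0.1:37147"
	log.Printf("serving binary mirror on http://%s", addr)
	log.Fatal(http.ListenAndServe(addr, fs))
}

A start such as the one in the log above ("out/minikube-linux-amd64 start --download-only -p binary-mirror-310251 --binary-mirror http://127.0.0.1:37147 ...") would then fetch the Kubernetes binaries from this server rather than the public release bucket.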

                                                
                                    
x
+
TestOffline (57.26s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-414703 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-414703 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (54.839227743s)
helpers_test.go:175: Cleaning up "offline-crio-414703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-414703
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-414703: (2.421769983s)
--- PASS: TestOffline (57.26s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-217306
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-217306: exit status 85 (54.35875ms)

                                                
                                                
-- stdout --
	* Profile "addons-217306" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-217306"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-217306
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-217306: exit status 85 (55.611703ms)

                                                
                                                
-- stdout --
	* Profile "addons-217306" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-217306"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (118.13s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-217306 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-217306 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m58.131951488s)
--- PASS: TestAddons/Setup (118.13s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-217306 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-217306 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (7.44s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-217306 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-217306 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5826b5f0-283f-4605-88bb-5c6ffacee344] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5826b5f0-283f-4605-88bb-5c6ffacee344] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003841495s
addons_test.go:633: (dbg) Run:  kubectl --context addons-217306 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-217306 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-217306 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.44s)
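Note: the fake-credentials check runs printenv inside the busybox pod to confirm the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT. Below is a minimal sketch of that assertion done via kubectl exec; the context and pod names are taken from this run, and the sketch only checks that the variables are non-empty rather than validating their contents.

// Illustrative check that the gcp-auth webhook injected the expected
// environment variables into the busybox pod, via kubectl exec as in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func podEnv(context, pod, name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", context, "exec", pod, "--",
		"/bin/sh", "-c", "printenv "+name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	for _, name := range []string{"GOOGLE_APPLICATION_CREDENTIALS", "GOOGLE_CLOUD_PROJECT"} {
		val, err := podEnv("addons-217306", "busybox", name)
		if err != nil || val == "" {
			log.Fatalf("%s not set in pod: %v", name, err)
		}
		fmt.Printf("%s=%s\n", name, val)
	}
}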

                                                
                                    
x
+
TestAddons/parallel/Registry (14.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.766323ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-fvmx4" [fd7a8710-7eef-4adb-80ed-b30907f7c30f] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002039879s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nns4j" [ed804182-6e80-4df2-a1d1-e7cf7eb658ec] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004079598s
addons_test.go:331: (dbg) Run:  kubectl --context addons-217306 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-217306 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-217306 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.51223231s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 ip
2025/02/05 02:06:06 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.25s)
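Note: the registry check above probes the addon both from inside the cluster (wget --spider against registry.kube-system.svc.cluster.local) and from the host against the node IP on port 5000 ("GET http://192.168.49.2:5000"). A small sketch of the host-side probe follows; the node IP and port are the ones from this run and would differ on another machine, and hitting /v2/ (the Docker registry HTTP API root) is this sketch's choice rather than something the test does.

// Illustrative host-side probe of the registry addon, mirroring the
// "GET http://192.168.49.2:5000" request in the log above.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.49.2:5000/v2/") // registry API root
	if err != nil {
		log.Fatalf("registry not reachable: %v", err)
	}
	defer resp.Body.Close()
	fmt.Println("registry responded with status:", resp.Status)
}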

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.61s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-r7wrr" [95c57eb3-2f94-415d-ab4a-0277f31dd852] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003422355s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-217306 addons disable inspektor-gadget --alsologtostderr -v=1: (5.606637802s)
--- PASS: TestAddons/parallel/InspektorGadget (11.61s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.02s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.686096ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-9jb6h" [8743c040-e408-4ca1-8b1b-ecd90bb14894] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003471026s
addons_test.go:402: (dbg) Run:  kubectl --context addons-217306 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.02s)

                                                
                                    
x
+
TestAddons/parallel/CSI (56.04s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0205 02:06:06.594144   19390 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0205 02:06:06.627445   19390 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0205 02:06:06.627471   19390 kapi.go:107] duration metric: took 33.338991ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 33.349043ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-217306 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-217306 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [df8a0793-8f23-4b50-b049-3eea37b88490] Pending
helpers_test.go:344: "task-pv-pod" [df8a0793-8f23-4b50-b049-3eea37b88490] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [df8a0793-8f23-4b50-b049-3eea37b88490] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003254262s
addons_test.go:511: (dbg) Run:  kubectl --context addons-217306 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-217306 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-217306 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-217306 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-217306 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-217306 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-217306 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [13d75a86-2183-4256-a0f2-f2409fd343dd] Pending
helpers_test.go:344: "task-pv-pod-restore" [13d75a86-2183-4256-a0f2-f2409fd343dd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [13d75a86-2183-4256-a0f2-f2409fd343dd] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003551956s
addons_test.go:553: (dbg) Run:  kubectl --context addons-217306 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-217306 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-217306 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-217306 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.518069058s)
--- PASS: TestAddons/parallel/CSI (56.04s)
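Note: the repeated "kubectl get pvc ... -o jsonpath={.status.phase}" lines above are a simple poll-until-Bound loop. Below is a minimal sketch of that pattern, shelling out to kubectl the way the test helpers do; the context name, namespace, PVC name, and 6-minute deadline are taken from this run, while the 2-second retry interval is an assumption.

// Illustrative poll loop waiting for a PVC to reach the Bound phase,
// mirroring the repeated jsonpath queries in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func pvcPhase(context, ns, name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", context, "get", "pvc", name,
		"-n", ns, "-o", "jsonpath={.status.phase}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		phase, err := pvcPhase("addons-217306", "default", "hpvc")
		if err != nil {
			log.Printf("kubectl error: %v", err)
		} else if phase == "Bound" {
			fmt.Println("PVC hpvc is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for PVC hpvc to become Bound")
}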

                                                
                                    
x
+
TestAddons/parallel/Headlamp (16.74s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-217306 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-lpbkf" [c890edf0-a98f-4271-ad78-26b4e34d60ff] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-lpbkf" [c890edf0-a98f-4271-ad78-26b4e34d60ff] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003865734s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-217306 addons disable headlamp --alsologtostderr -v=1: (6.000498397s)
--- PASS: TestAddons/parallel/Headlamp (16.74s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.52s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-r6f85" [9b74b88f-28d2-4aaa-bf40-f68f87db407a] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002726978s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (53.12s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-217306 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-217306 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-217306 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [15eb18c8-77d4-4e8b-ab6f-3260cfcdc48c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [15eb18c8-77d4-4e8b-ab6f-3260cfcdc48c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [15eb18c8-77d4-4e8b-ab6f-3260cfcdc48c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002611927s
addons_test.go:906: (dbg) Run:  kubectl --context addons-217306 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 ssh "cat /opt/local-path-provisioner/pvc-4029b30c-10c6-440c-9aa0-78582bd94f12_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-217306 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-217306 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-217306 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.286417158s)
--- PASS: TestAddons/parallel/LocalPath (53.12s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.5s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-pkrw7" [4d3cffa4-42a6-427e-9dd1-335e4dc0455f] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.002809428s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.50s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.62s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-rld2s" [27d9f222-87ff-4675-8335-52d1e11a1aea] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003524278s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-217306 addons disable yakd --alsologtostderr -v=1: (5.618402032s)
--- PASS: TestAddons/parallel/Yakd (10.62s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (6.45s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-576th" [8933c1a8-42a0-45c1-ae49-7f09ed352541] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003239946s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.45s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.06s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-217306
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-217306: (11.807280142s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-217306
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-217306
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-217306
--- PASS: TestAddons/StoppedEnableDisable (12.06s)

                                                
                                    
x
+
TestCertOptions (30.04s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-536717 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-536717 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (27.271816136s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-536717 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-536717 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-536717 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-536717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-536717
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-536717: (2.069630283s)
--- PASS: TestCertOptions (30.04s)
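Note: TestCertOptions verifies that the extra --apiserver-ips/--apiserver-names values ended up as subject alternative names in /var/lib/minikube/certs/apiserver.crt, which the log inspects with "openssl x509 -text -noout". The sketch below performs an equivalent check in Go on a locally copied PEM file; copying the certificate to ./apiserver.crt is an assumption for illustration, and the sketch only reports presence of two of the SANs the test requests.

// Illustrative check that an apiserver certificate carries the expected SANs,
// equivalent in spirit to the "openssl x509 -text -noout" inspection above.
// Assumes the certificate has been copied to ./apiserver.crt for inspection.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"net"
	"os"
)

func main() {
	data, err := os.ReadFile("./apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// The test passes --apiserver-ips=192.168.15.15 and --apiserver-names=www.google.com.
	wantIP := net.ParseIP("192.168.15.15")
	foundIP := false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			foundIP = true
		}
	}
	foundDNS := false
	for _, name := range cert.DNSNames {
		if name == "www.google.com" {
			foundDNS = true
		}
	}
	fmt.Printf("IP SAN 192.168.15.15 present: %v, DNS SAN www.google.com present: %v\n", foundIP, foundDNS)
}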

                                                
                                    
x
+
TestCertExpiration (233.64s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-472446 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-472446 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (36.107407637s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-472446 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-472446 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (15.172145587s)
helpers_test.go:175: Cleaning up "cert-expiration-472446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-472446
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-472446: (2.363524289s)
--- PASS: TestCertExpiration (233.64s)

                                                
                                    
x
+
TestForceSystemdFlag (35.34s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-791357 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-791357 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.110957127s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-791357 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-791357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-791357
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-791357: (6.915439855s)
--- PASS: TestForceSystemdFlag (35.34s)
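Note: the --force-systemd check above reads /etc/crio/crio.conf.d/02-crio.conf over "minikube ssh" and looks for the systemd cgroup manager setting. A minimal sketch of that assertion is shown below, shelling out the same way; the profile name is from this run, and the exact expected TOML line (cgroup_manager = "systemd") is what a CRI-O drop-in typically contains and is stated here as an assumption rather than quoted from this run's output.

// Illustrative assertion that CRI-O inside the minikube node is configured for
// the systemd cgroup manager, mirroring the "ssh cat 02-crio.conf" step above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-791357",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
	if err != nil {
		log.Fatalf("minikube ssh failed: %v\n%s", err, out)
	}
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		log.Fatal("systemd cgroup manager not found in 02-crio.conf")
	}
}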

                                                
                                    
x
+
TestForceSystemdEnv (37.46s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-447051 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-447051 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.930065659s)
helpers_test.go:175: Cleaning up "force-systemd-env-447051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-447051
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-447051: (2.532669077s)
--- PASS: TestForceSystemdEnv (37.46s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.38s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0205 02:46:49.318179   19390 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0205 02:46:49.318347   19390 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0205 02:46:49.348297   19390 install.go:62] docker-machine-driver-kvm2: exit status 1
W0205 02:46:49.348659   19390 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0205 02:46:49.348730   19390 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate488400781/001/docker-machine-driver-kvm2
I0205 02:46:49.617999   19390 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate488400781/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0] Decompressors:map[bz2:0xc0001215d8 gz:0xc000121660 tar:0xc000121610 tar.bz2:0xc000121620 tar.gz:0xc000121630 tar.xz:0xc000121640 tar.zst:0xc000121650 tbz2:0xc000121620 tgz:0xc000121630 txz:0xc000121640 tzst:0xc000121650 xz:0xc000121668 zip:0xc000121690 zst:0xc0001216a0] Getters:map[file:0xc001d063d0 http:0xc0008001e0 https:0xc000800230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0205 02:46:49.618052   19390 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate488400781/001/docker-machine-driver-kvm2
I0205 02:46:51.269609   19390 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0205 02:46:51.269704   19390 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0205 02:46:51.304344   19390 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0205 02:46:51.304385   19390 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0205 02:46:51.304462   19390 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0205 02:46:51.304498   19390 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate488400781/002/docker-machine-driver-kvm2
I0205 02:46:51.468906   19390 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate488400781/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0] Decompressors:map[bz2:0xc0001215d8 gz:0xc000121660 tar:0xc000121610 tar.bz2:0xc000121620 tar.gz:0xc000121630 tar.xz:0xc000121640 tar.zst:0xc000121650 tbz2:0xc000121620 tgz:0xc000121630 txz:0xc000121640 tzst:0xc000121650 xz:0xc000121668 zip:0xc000121690 zst:0xc0001216a0] Getters:map[file:0xc00062b070 http:0xc00057e870 https:0xc00057e8c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0205 02:46:51.468959   19390 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate488400781/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.38s)
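Note: the driver install/update flow above first tries the arch-suffixed release asset, falls back to the common asset name when the checksum file 404s, and reinstalls when the local driver reports an older version (1.1.1 vs the wanted 1.3.0). The sketch below illustrates only the fallback-download part, simplified to fall back on any failure of the first download; the URLs follow the release naming seen in the log and the destination path is an assumption.

// Illustrative fallback download for docker-machine-driver-kvm2: try the
// arch-specific asset first, fall back to the common one on failure, as the
// log above does. /tmp destination is an assumption for illustration.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
	dst := "/tmp/docker-machine-driver-kvm2"

	// Prefer the arch-specific asset; fall back to the common name, as in the log.
	if err := fetch(base+"-amd64", dst); err != nil {
		log.Printf("arch-specific download failed (%v), trying the common version", err)
		if err := fetch(base, dst); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Println("driver downloaded to", dst)
}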

                                                
                                    
x
+
TestErrorSpam/setup (22.92s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-075895 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-075895 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-075895 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-075895 --driver=docker  --container-runtime=crio: (22.923452772s)
--- PASS: TestErrorSpam/setup (22.92s)

                                                
                                    
x
+
TestErrorSpam/start (0.59s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-075895 --log_dir /tmp/nospam-075895 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-075895 --log_dir /tmp/nospam-075895 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-075895 --log_dir /tmp/nospam-075895 start --dry-run
--- PASS: TestErrorSpam/start (0.59s)

                                                
                                    
x
+
TestErrorSpam/status (0.86s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-075895 --log_dir /tmp/nospam-075895 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-075895 --log_dir /tmp/nospam-075895 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-075895 --log_dir /tmp/nospam-075895 status
--- PASS: TestErrorSpam/status (0.86s)

                                                
                                    
x
+
TestErrorSpam/pause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-075895 --log_dir /tmp/nospam-075895 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-075895 --log_dir /tmp/nospam-075895 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-075895 --log_dir /tmp/nospam-075895 pause
--- PASS: TestErrorSpam/pause (1.55s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.57s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-075895 --log_dir /tmp/nospam-075895 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-075895 --log_dir /tmp/nospam-075895 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-075895 --log_dir /tmp/nospam-075895 unpause
--- PASS: TestErrorSpam/unpause (1.57s)

                                                
                                    
x
+
TestErrorSpam/stop (1.36s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-075895 --log_dir /tmp/nospam-075895 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-075895 --log_dir /tmp/nospam-075895 stop: (1.175735852s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-075895 --log_dir /tmp/nospam-075895 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-075895 --log_dir /tmp/nospam-075895 stop
--- PASS: TestErrorSpam/stop (1.36s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20363-12617/.minikube/files/etc/test/nested/copy/19390/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (40.08s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-150463 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-150463 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (40.082827689s)
--- PASS: TestFunctional/serial/StartWithProxy (40.08s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (22.03s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0205 02:10:12.907738   19390 config.go:182] Loaded profile config "functional-150463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-150463 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-150463 --alsologtostderr -v=8: (22.03075651s)
functional_test.go:680: soft start took 22.032038765s for "functional-150463" cluster.
I0205 02:10:34.938986   19390 config.go:182] Loaded profile config "functional-150463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (22.03s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-150463 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-150463 cache add registry.k8s.io/pause:3.1: (1.078354697s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 cache add registry.k8s.io/pause:3.3
E0205 02:10:36.830567   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:10:36.836980   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:10:36.848359   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:10:36.869811   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:10:36.911234   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:10:36.992693   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:10:37.154245   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-150463 cache add registry.k8s.io/pause:3.3: (1.115718349s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 cache add registry.k8s.io/pause:latest
E0205 02:10:37.475743   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:10:38.117838   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-150463 cache add registry.k8s.io/pause:latest: (1.127461872s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-150463 /tmp/TestFunctionalserialCacheCmdcacheadd_local1825381747/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 cache add minikube-local-cache-test:functional-150463
E0205 02:10:39.400191   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 cache delete minikube-local-cache-test:functional-150463
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-150463
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)
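Editor's note: the local-image variant above builds an image on the host and copies it into minikube's image cache. A minimal sketch of the same flow, assuming a buildable Dockerfile in the current directory (the build context path is an assumption; the test uses a generated temp directory):
	# build an image against the host docker daemon
	$ docker build -t minikube-local-cache-test:functional-150463 .
	# copy it into the profile's cache, then remove it from the cache and the host
	$ minikube -p functional-150463 cache add minikube-local-cache-test:functional-150463
	$ minikube -p functional-150463 cache delete minikube-local-cache-test:functional-150463
	$ docker rmi minikube-local-cache-test:functional-150463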

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-150463 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (268.389672ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
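Editor's note: the reload test above removes a cached image inside the node and restores it from the host-side cache. A minimal sketch of the same round trip, assuming the pause image was previously added with `cache add`:
	# delete the image from the node's CRI-O store
	$ minikube -p functional-150463 ssh "sudo crictl rmi registry.k8s.io/pause:latest"
	# confirm it is gone (crictl inspecti exits non-zero)
	$ minikube -p functional-150463 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"
	# push every image in the local cache back into the node, then re-check
	$ minikube -p functional-150463 cache reload
	$ minikube -p functional-150463 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"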

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 kubectl -- --context functional-150463 get pods
E0205 02:10:41.962036   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-150463 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (31.37s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-150463 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0205 02:10:47.083590   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:10:57.325115   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-150463 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.374403917s)
functional_test.go:778: restart took 31.374525394s for "functional-150463" cluster.
I0205 02:11:13.474131   19390 config.go:182] Loaded profile config "functional-150463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (31.37s)
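Editor's note: the restart above passes an extra kube-apiserver flag through minikube. A minimal sketch of the same knob, assuming the functional-150463 profile already exists; any supported component.key=value triple can be substituted:
	# restart the existing profile with an additional apiserver admission plugin
	$ minikube start -p functional-150463 \
	    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	# --wait=all blocks until the verified components report healthy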

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-150463 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-150463 logs: (1.319108335s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 logs --file /tmp/TestFunctionalserialLogsFileCmd3510696950/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-150463 logs --file /tmp/TestFunctionalserialLogsFileCmd3510696950/001/logs.txt: (1.33902948s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.43s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-150463 apply -f testdata/invalidsvc.yaml
E0205 02:11:17.807406   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-150463
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-150463: exit status 115 (314.978522ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31697 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-150463 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.43s)
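Editor's note: the check above confirms that `minikube service` refuses to print a URL for a service with no running backend. A minimal sketch, assuming a service manifest whose selector matches no pods (testdata/invalidsvc.yaml stands in for any such manifest):
	$ kubectl --context functional-150463 apply -f testdata/invalidsvc.yaml
	# exits with status 115 and SVC_UNREACHABLE because no running pod backs the service
	$ minikube -p functional-150463 service invalid-svc
	$ kubectl --context functional-150463 delete -f testdata/invalidsvc.yaml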

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-150463 config get cpus: exit status 14 (62.370833ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-150463 config get cpus: exit status 14 (60.697193ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
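Editor's note: the config round trip above can be replayed directly. A minimal sketch, assuming the cpus key starts out unset (reading an unset key exits with status 14, as in the stderr above):
	$ minikube -p functional-150463 config get cpus     # exit 14: key not found
	$ minikube -p functional-150463 config set cpus 2
	$ minikube -p functional-150463 config get cpus     # prints 2
	$ minikube -p functional-150463 config unset cpus
	$ minikube -p functional-150463 config get cpus     # exit 14 again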

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (143.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-150463 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-150463 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 60651: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (143.33s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-150463 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-150463 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (144.859937ms)

                                                
                                                
-- stdout --
	* [functional-150463] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 02:14:19.979501   60291 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:14:19.979872   60291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:14:19.979883   60291 out.go:358] Setting ErrFile to fd 2...
	I0205 02:14:19.979888   60291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:14:19.980108   60291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
	I0205 02:14:19.980625   60291 out.go:352] Setting JSON to false
	I0205 02:14:19.981509   60291 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3406,"bootTime":1738718254,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 02:14:19.981635   60291 start.go:139] virtualization: kvm guest
	I0205 02:14:19.983581   60291 out.go:177] * [functional-150463] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 02:14:19.984836   60291 notify.go:220] Checking for updates...
	I0205 02:14:19.984907   60291 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 02:14:19.986483   60291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 02:14:19.987837   60291 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig
	I0205 02:14:19.989092   60291 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube
	I0205 02:14:19.990610   60291 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 02:14:19.991945   60291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 02:14:19.993678   60291 config.go:182] Loaded profile config "functional-150463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:14:19.994150   60291 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 02:14:20.017367   60291 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0205 02:14:20.017463   60291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0205 02:14:20.065180   60291 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-05 02:14:20.056741433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0205 02:14:20.065269   60291 docker.go:318] overlay module found
	I0205 02:14:20.067031   60291 out.go:177] * Using the docker driver based on existing profile
	I0205 02:14:20.068320   60291 start.go:297] selected driver: docker
	I0205 02:14:20.068332   60291 start.go:901] validating driver "docker" against &{Name:functional-150463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-150463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:14:20.068432   60291 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 02:14:20.070374   60291 out.go:201] 
	W0205 02:14:20.071493   60291 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0205 02:14:20.072633   60291 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-150463 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)
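Editor's note: the dry-run test above verifies that a memory request below the usable minimum is rejected before any work is done. A minimal sketch, assuming the same existing profile; the 1800MB floor and exit code 23 come from the error text captured above:
	# rejected: 250MiB is below the 1800MB minimum (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY)
	$ minikube start -p functional-150463 --dry-run --memory 250MB \
	    --driver=docker --container-runtime=crio
	# accepted: dry-run against the profile's existing memory setting
	$ minikube start -p functional-150463 --dry-run --alsologtostderr -v=1 \
	    --driver=docker --container-runtime=crio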

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-150463 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-150463 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (147.008717ms)

                                                
                                                
-- stdout --
	* [functional-150463] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 02:14:19.828288   60216 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:14:19.828430   60216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:14:19.828440   60216 out.go:358] Setting ErrFile to fd 2...
	I0205 02:14:19.828447   60216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:14:19.828741   60216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
	I0205 02:14:19.829279   60216 out.go:352] Setting JSON to false
	I0205 02:14:19.830291   60216 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3406,"bootTime":1738718254,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 02:14:19.830395   60216 start.go:139] virtualization: kvm guest
	I0205 02:14:19.832910   60216 out.go:177] * [functional-150463] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0205 02:14:19.834319   60216 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 02:14:19.834349   60216 notify.go:220] Checking for updates...
	I0205 02:14:19.837481   60216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 02:14:19.838846   60216 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig
	I0205 02:14:19.840317   60216 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube
	I0205 02:14:19.841663   60216 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 02:14:19.842944   60216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 02:14:19.844522   60216 config.go:182] Loaded profile config "functional-150463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:14:19.844968   60216 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 02:14:19.867655   60216 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0205 02:14:19.867788   60216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0205 02:14:19.918157   60216 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-05 02:14:19.9094801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0205 02:14:19.918261   60216 docker.go:318] overlay module found
	I0205 02:14:19.920100   60216 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0205 02:14:19.921326   60216 start.go:297] selected driver: docker
	I0205 02:14:19.921343   60216 start.go:901] validating driver "docker" against &{Name:functional-150463 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-150463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0205 02:14:19.921457   60216 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 02:14:19.924031   60216 out.go:201] 
	W0205 02:14:19.925473   60216 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0205 02:14:19.926857   60216 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.97s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (65.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-150463 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-150463 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-9vzwq" [45b7b750-ea9e-4612-9977-c2ad908e44c1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-9vzwq" [45b7b750-ea9e-4612-9977-c2ad908e44c1] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 1m5.003242502s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:32044
functional_test.go:1692: http://192.168.49.2:32044: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-9vzwq

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32044
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (65.47s)
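Editor's note: the connectivity test above is an ordinary deploy/expose/fetch loop. A minimal sketch, assuming the echoserver image is reachable from the node; the curl call is a stand-in for the Go HTTP client the test actually uses, and the NodePort URL is read back from `minikube service --url`:
	$ kubectl --context functional-150463 create deployment hello-node-connect \
	    --image=registry.k8s.io/echoserver:1.8
	$ kubectl --context functional-150463 expose deployment hello-node-connect \
	    --type=NodePort --port=8080
	# once the pod is Running, fetch the NodePort URL and hit it
	$ curl "$(minikube -p functional-150463 service hello-node-connect --url)"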

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh -n functional-150463 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 cp functional-150463:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1605950351/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh -n functional-150463 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh -n functional-150463 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.75s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/19390/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "sudo cat /etc/test/nested/copy/19390/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/19390.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "sudo cat /etc/ssl/certs/19390.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/19390.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "sudo cat /usr/share/ca-certificates/19390.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/193902.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "sudo cat /etc/ssl/certs/193902.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/193902.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "sudo cat /usr/share/ca-certificates/193902.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.69s)
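Editor's note: the cert-sync test above checks that a test certificate (named after the test run's PID, 19390 here) is visible inside the node at the expected locations. A minimal sketch of the in-node checks, assuming the certificate was synced into the profile before start; the host-side source directory is not shown in this log:
	# the synced certificate is mirrored into the guest at these paths
	$ minikube -p functional-150463 ssh "sudo cat /etc/ssl/certs/19390.pem"
	$ minikube -p functional-150463 ssh "sudo cat /usr/share/ca-certificates/19390.pem"
	# and indexed under its subject hash
	$ minikube -p functional-150463 ssh "sudo cat /etc/ssl/certs/51391683.0"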

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-150463 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-150463 ssh "sudo systemctl is-active docker": exit status 1 (288.638549ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-150463 ssh "sudo systemctl is-active containerd": exit status 1 (283.616948ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
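Editor's note: the runtime check above asserts that only the selected container runtime is active inside the node. A minimal sketch for a CRI-O profile; the crio unit name is an assumption not shown in this log, and each inactive unit makes is-active exit non-zero:
	$ minikube -p functional-150463 ssh "sudo systemctl is-active crio"        # expected: active
	$ minikube -p functional-150463 ssh "sudo systemctl is-active docker"      # inactive, exit 3
	$ minikube -p functional-150463 ssh "sudo systemctl is-active containerd"  # inactive, exit 3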

                                                
                                    
x
+
TestFunctional/parallel/License (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.44s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-150463 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-150463
localhost/kicbase/echo-server:functional-150463
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20241212-9f82dd49
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-150463 image ls --format short --alsologtostderr:
I0205 02:14:32.120560   61954 out.go:345] Setting OutFile to fd 1 ...
I0205 02:14:32.120678   61954 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:32.120688   61954 out.go:358] Setting ErrFile to fd 2...
I0205 02:14:32.120693   61954 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:32.120884   61954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
I0205 02:14:32.121500   61954 config.go:182] Loaded profile config "functional-150463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:32.121622   61954 config.go:182] Loaded profile config "functional-150463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:32.122005   61954 cli_runner.go:164] Run: docker container inspect functional-150463 --format={{.State.Status}}
I0205 02:14:32.139310   61954 ssh_runner.go:195] Run: systemctl --version
I0205 02:14:32.139366   61954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-150463
I0205 02:14:32.155880   61954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/functional-150463/id_rsa Username:docker}
I0205 02:14:32.241747   61954 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-150463 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/minikube-local-cache-test     | functional-150463  | a5b38046b23b8 | 3.33kB |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | d300845f67aeb | 95.7MB |
| localhost/kicbase/echo-server           | functional-150463  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 95c0bda56fc4d | 98.1MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/my-image                      | functional-150463  | 6b319493c1544 | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 019ee182b58e2 | 90.8MB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e29f9c7391fd9 | 95.3MB |
| registry.k8s.io/kube-scheduler          | v1.32.1            | 2b0d6572d062c | 70.6MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-150463 image ls --format table --alsologtostderr:
I0205 02:14:34.676664   62533 out.go:345] Setting OutFile to fd 1 ...
I0205 02:14:34.676843   62533 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:34.676857   62533 out.go:358] Setting ErrFile to fd 2...
I0205 02:14:34.676864   62533 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:34.677338   62533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
I0205 02:14:34.678017   62533 config.go:182] Loaded profile config "functional-150463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:34.678111   62533 config.go:182] Loaded profile config "functional-150463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:34.678475   62533 cli_runner.go:164] Run: docker container inspect functional-150463 --format={{.State.Status}}
I0205 02:14:34.696498   62533 ssh_runner.go:195] Run: systemctl --version
I0205 02:14:34.696543   62533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-150463
I0205 02:14:34.714856   62533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/functional-150463/id_rsa Username:docker}
I0205 02:14:34.801822   62533 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-150463 image ls --format json --alsologtostderr:
[{"id":"88aab42bc773b0ece85856f6aadfa7ab7debf1be464a8554536b4d3419d953ef","repoDigests":["docker.io/library/741ba7832109850ba3e40849004ea178f11052871c149c1a7b22f5932e0ada88-tmp@sha256:1f4b16a68a89ee2494731c6a95ea77cb0f610a3d3a5b285a0404834451aaa13b"],"repoTags":[],"size":"1465611"},
{"id":"6b319493c15445643925a4b62b30d66597f5399ea36d166bdda366c908a6e4f6","repoDigests":["localhost/my-image@sha256:4e71c721f84cbcef95e95133c5028718adc9efde77a5e09424a0a6b3c11c8e75"],"repoTags":["localhost/my-image:functional-150463"],"size":"1468194"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
{"id":"a5b38046b23b863763149c4dd87fa1485c7374e4c11a9abdb4d826b65a1d02ba","repoDigests":["localhost/minikube-local-cache-test@sha256:a0668169e6157fdf37b13ffc9afc280adf65b7284729d96711deb3b012e2afa6"],"repoTags":["localhost/minikube-local-cache-test:functional-150463"],"size":"3330"},
{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},
{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"98051552"},
{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954","registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"90793286"},
{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},
{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-150463"],"size":"4943877"},
{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},
{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},
{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e","registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"70649158"},
{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},
{"id":"d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26","docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"95714353"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"95271321"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-150463 image ls --format json --alsologtostderr:
I0205 02:14:34.465639   62479 out.go:345] Setting OutFile to fd 1 ...
I0205 02:14:34.465776   62479 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:34.465786   62479 out.go:358] Setting ErrFile to fd 2...
I0205 02:14:34.465790   62479 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:34.465975   62479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
I0205 02:14:34.466609   62479 config.go:182] Loaded profile config "functional-150463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:34.466732   62479 config.go:182] Loaded profile config "functional-150463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:34.467127   62479 cli_runner.go:164] Run: docker container inspect functional-150463 --format={{.State.Status}}
I0205 02:14:34.484652   62479 ssh_runner.go:195] Run: systemctl --version
I0205 02:14:34.484704   62479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-150463
I0205 02:14:34.503411   62479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/functional-150463/id_rsa Username:docker}
I0205 02:14:34.593790   62479 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-150463 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a5b38046b23b863763149c4dd87fa1485c7374e4c11a9abdb4d826b65a1d02ba
repoDigests:
- localhost/minikube-local-cache-test@sha256:a0668169e6157fdf37b13ffc9afc280adf65b7284729d96711deb3b012e2afa6
repoTags:
- localhost/minikube-local-cache-test:functional-150463
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
- registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "70649158"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-150463
size: "4943877"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "98051552"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "95271321"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
- docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "95714353"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
- registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "90793286"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-150463 image ls --format yaml --alsologtostderr:
I0205 02:14:32.323127   62003 out.go:345] Setting OutFile to fd 1 ...
I0205 02:14:32.323384   62003 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:32.323395   62003 out.go:358] Setting ErrFile to fd 2...
I0205 02:14:32.323399   62003 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:32.323626   62003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
I0205 02:14:32.324233   62003 config.go:182] Loaded profile config "functional-150463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:32.324347   62003 config.go:182] Loaded profile config "functional-150463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:32.324754   62003 cli_runner.go:164] Run: docker container inspect functional-150463 --format={{.State.Status}}
I0205 02:14:32.342223   62003 ssh_runner.go:195] Run: systemctl --version
I0205 02:14:32.342286   62003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-150463
I0205 02:14:32.359824   62003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/functional-150463/id_rsa Username:docker}
I0205 02:14:32.450019   62003 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
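
The four ImageList variants above all shell into the node and run "sudo crictl images --output json" (visible in each Stderr block); they differ only in how the result is rendered. A condensed way to reproduce them against this profile, assuming the same minikube build is on hand:

	out/minikube-linux-amd64 -p functional-150463 image ls --format short
	out/minikube-linux-amd64 -p functional-150463 image ls --format table
	out/minikube-linux-amd64 -p functional-150463 image ls --format json
	out/minikube-linux-amd64 -p functional-150463 image ls --format yaml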

TestFunctional/parallel/ImageCommands/ImageBuild (1.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-150463 ssh pgrep buildkitd: exit status 1 (236.854547ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image build -t localhost/my-image:functional-150463 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-150463 image build -t localhost/my-image:functional-150463 testdata/build --alsologtostderr: (1.485728802s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-150463 image build -t localhost/my-image:functional-150463 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 88aab42bc77
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-150463
--> 6b319493c15
Successfully tagged localhost/my-image:functional-150463
6b319493c15445643925a4b62b30d66597f5399ea36d166bdda366c908a6e4f6
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-150463 image build -t localhost/my-image:functional-150463 testdata/build --alsologtostderr:
I0205 02:14:32.769688   62151 out.go:345] Setting OutFile to fd 1 ...
I0205 02:14:32.769856   62151 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:32.769865   62151 out.go:358] Setting ErrFile to fd 2...
I0205 02:14:32.769869   62151 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0205 02:14:32.770051   62151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
I0205 02:14:32.770604   62151 config.go:182] Loaded profile config "functional-150463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:32.771119   62151 config.go:182] Loaded profile config "functional-150463": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0205 02:14:32.771519   62151 cli_runner.go:164] Run: docker container inspect functional-150463 --format={{.State.Status}}
I0205 02:14:32.788864   62151 ssh_runner.go:195] Run: systemctl --version
I0205 02:14:32.788912   62151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-150463
I0205 02:14:32.805357   62151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/functional-150463/id_rsa Username:docker}
I0205 02:14:32.893882   62151 build_images.go:161] Building image from path: /tmp/build.162938380.tar
I0205 02:14:32.893948   62151 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0205 02:14:32.901990   62151 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.162938380.tar
I0205 02:14:32.905033   62151 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.162938380.tar: stat -c "%s %y" /var/lib/minikube/build/build.162938380.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.162938380.tar': No such file or directory
I0205 02:14:32.905061   62151 ssh_runner.go:362] scp /tmp/build.162938380.tar --> /var/lib/minikube/build/build.162938380.tar (3072 bytes)
I0205 02:14:32.927090   62151 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.162938380
I0205 02:14:32.935142   62151 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.162938380 -xf /var/lib/minikube/build/build.162938380.tar
I0205 02:14:32.943287   62151 crio.go:315] Building image: /var/lib/minikube/build/build.162938380
I0205 02:14:32.943351   62151 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-150463 /var/lib/minikube/build/build.162938380 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0205 02:14:34.187182   62151 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-150463 /var/lib/minikube/build/build.162938380 --cgroup-manager=cgroupfs: (1.243806131s)
I0205 02:14:34.187237   62151 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.162938380
I0205 02:14:34.195436   62151 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.162938380.tar
I0205 02:14:34.203559   62151 build_images.go:217] Built localhost/my-image:functional-150463 from /tmp/build.162938380.tar
I0205 02:14:34.203593   62151 build_images.go:133] succeeded building to: functional-150463
I0205 02:14:34.203599   62151 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.93s)
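
The STEP 1/3 through 3/3 lines above imply a minimal build context. A hedged reconstruction follows; the real testdata/build directory may differ, and the contents of content.txt are a placeholder:

	mkdir -p build && cd build
	echo "placeholder" > content.txt                  # assumed file contents
	printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
	out/minikube-linux-amd64 -p functional-150463 image build -t localhost/my-image:functional-150463 .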

TestFunctional/parallel/ImageCommands/Setup (1s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-150463
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.00s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image load --daemon kicbase/echo-server:functional-150463 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-150463 image load --daemon kicbase/echo-server:functional-150463 --alsologtostderr: (1.154546021s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.37s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-150463 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-150463 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-150463 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 54782: os: process already finished
helpers_test.go:502: unable to terminate pid 54528: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-150463 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image load --daemon kicbase/echo-server:functional-150463 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-150463 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-150463
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image load --daemon kicbase/echo-server:functional-150463 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.23s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image save kicbase/echo-server:functional-150463 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image rm kicbase/echo-server:functional-150463 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-150463
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 image save --daemon kicbase/echo-server:functional-150463 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-150463
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)
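
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above form a save/remove/reload round trip. A condensed sketch using the same commands (tarball path shortened for readability):

	out/minikube-linux-amd64 -p functional-150463 image save kicbase/echo-server:functional-150463 ./echo-server-save.tar
	out/minikube-linux-amd64 -p functional-150463 image rm kicbase/echo-server:functional-150463
	out/minikube-linux-amd64 -p functional-150463 image load ./echo-server-save.tar
	out/minikube-linux-amd64 -p functional-150463 image ls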

TestFunctional/parallel/ServiceCmd/DeployApp (8.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-150463 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-150463 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-hctqr" [cdcf0818-1cd0-43f3-b1cc-c1e7df8f4f7d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-hctqr" [cdcf0818-1cd0-43f3-b1cc-c1e7df8f4f7d] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003332488s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.15s)

TestFunctional/parallel/ServiceCmd/List (0.88s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.88s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.88s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 service list -o json
functional_test.go:1511: Took "884.043646ms" to run "out/minikube-linux-amd64 -p functional-150463 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.88s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:31958
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:31958
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)
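
The ServiceCmd subtests above follow one flow: create a deployment, expose it as a NodePort service, then resolve its URL through minikube. A condensed sketch with the same names and image as this run:

	kubectl --context functional-150463 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-150463 expose deployment hello-node --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-150463 service list
	out/minikube-linux-amd64 -p functional-150463 service hello-node --url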

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "315.81643ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "49.774571ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "317.990754ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "50.848999ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/any-port (90.64s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-150463 /tmp/TestFunctionalparallelMountCmdany-port2833295931/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1738721565952644482" to /tmp/TestFunctionalparallelMountCmdany-port2833295931/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1738721565952644482" to /tmp/TestFunctionalparallelMountCmdany-port2833295931/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1738721565952644482" to /tmp/TestFunctionalparallelMountCmdany-port2833295931/001/test-1738721565952644482
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-150463 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (262.294417ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0205 02:12:46.215314   19390 retry.go:31] will retry after 588.882084ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb  5 02:12 created-by-test
-rw-r--r-- 1 docker docker 24 Feb  5 02:12 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb  5 02:12 test-1738721565952644482
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh cat /mount-9p/test-1738721565952644482
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-150463 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [deabf349-132c-4d15-91b9-39655fc5a5bc] Pending
helpers_test.go:344: "busybox-mount" [deabf349-132c-4d15-91b9-39655fc5a5bc] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0205 02:13:20.691847   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [deabf349-132c-4d15-91b9-39655fc5a5bc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [deabf349-132c-4d15-91b9-39655fc5a5bc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m28.00286252s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-150463 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-150463 /tmp/TestFunctionalparallelMountCmdany-port2833295931/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (90.64s)
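
The mount tests run "minikube mount" as a background 9p server and then verify it from inside the guest. A rough sketch of the same verification steps (the host directory path is arbitrary):

	out/minikube-linux-amd64 mount -p functional-150463 /tmp/mount-src:/mount-9p &
	out/minikube-linux-amd64 -p functional-150463 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-150463 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-150463 ssh "sudo umount -f /mount-9p"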

TestFunctional/parallel/MountCmd/specific-port (1.76s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-150463 /tmp/TestFunctionalparallelMountCmdspecific-port3276311860/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-150463 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (250.889602ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0205 02:14:16.843873   19390 retry.go:31] will retry after 572.887382ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-150463 /tmp/TestFunctionalparallelMountCmdspecific-port3276311860/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-150463 ssh "sudo umount -f /mount-9p": exit status 1 (242.200717ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-150463 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-150463 /tmp/TestFunctionalparallelMountCmdspecific-port3276311860/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.76s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-150463 /tmp/TestFunctionalparallelMountCmdVerifyCleanup61532747/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-150463 /tmp/TestFunctionalparallelMountCmdVerifyCleanup61532747/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-150463 /tmp/TestFunctionalparallelMountCmdVerifyCleanup61532747/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-150463 ssh "findmnt -T" /mount1: exit status 1 (309.08379ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0205 02:14:18.663368   19390 retry.go:31] will retry after 335.92761ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-150463 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-150463 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-150463 /tmp/TestFunctionalparallelMountCmdVerifyCleanup61532747/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-150463 /tmp/TestFunctionalparallelMountCmdVerifyCleanup61532747/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-150463 /tmp/TestFunctionalparallelMountCmdVerifyCleanup61532747/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-150463 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
E0205 02:20:36.830199   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-150463
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-150463
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-150463
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (100.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-335894 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-335894 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m40.054916559s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (100.74s)
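StartCluster boots a multi-control-plane cluster with --ha on the docker driver and the crio runtime, then verifies it with "status". A minimal sketch of that start-then-status sequence, assuming minikube is on PATH and the ha-335894 profile name is free; the flags mirror the logged invocation.

// hastart.go: sketch of the start-then-status sequence shown above.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("minikube", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := run("start", "-p", "ha-335894", "--wait=true", "--memory=2200",
		"--ha", "--driver=docker", "--container-runtime=crio"); err != nil {
		log.Fatal(err)
	}
	// status exits non-zero when any node is not fully up; see the
	// StopSecondaryNode and StopCluster entries later in this report.
	if err := run("-p", "ha-335894", "status"); err != nil {
		log.Fatal(err)
	}
}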

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-335894 -- rollout status deployment/busybox: (2.130878645s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- exec busybox-58667487b6-4nf9r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- exec busybox-58667487b6-6ngk6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- exec busybox-58667487b6-7cdgh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- exec busybox-58667487b6-4nf9r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- exec busybox-58667487b6-6ngk6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- exec busybox-58667487b6-7cdgh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- exec busybox-58667487b6-4nf9r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- exec busybox-58667487b6-6ngk6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- exec busybox-58667487b6-7cdgh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.00s)
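DeployApp applies testdata/ha/ha-pod-dns-test.yaml, waits for the busybox rollout, and then runs nslookup from every replica against kubernetes.io, kubernetes.default, and the fully qualified service name. A minimal sketch of that per-pod DNS check, assuming a kubeconfig context named ha-335894 and busybox pods in the default namespace (the test reaches kubectl through the minikube wrapper; plain kubectl is used here for brevity).

// dnscheck.go: sketch of the per-pod lookups run above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "ha-335894", "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	for _, pod := range strings.Fields(string(out)) {
		for _, name := range []string{"kubernetes.io", "kubernetes.default",
			"kubernetes.default.svc.cluster.local"} {
			// Same check as the test: nslookup from inside the pod.
			if err := exec.Command("kubectl", "--context", "ha-335894",
				"exec", pod, "--", "nslookup", name).Run(); err != nil {
				fmt.Printf("%s: lookup %s failed: %v\n", pod, name, err)
			}
		}
	}
}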

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- exec busybox-58667487b6-4nf9r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- exec busybox-58667487b6-4nf9r -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- exec busybox-58667487b6-6ngk6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- exec busybox-58667487b6-6ngk6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- exec busybox-58667487b6-7cdgh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-335894 -- exec busybox-58667487b6-7cdgh -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.01s)
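PingHostFromPods checks host reachability rather than cluster DNS: inside each pod it resolves host.minikube.internal (the awk 'NR==5' | cut pipeline pulls the address out of nslookup's fifth output line) and then pings that address once, which on the default docker network is the 192.168.49.1 gateway. A minimal sketch of the same probe, assuming the ha-335894 context and a pod name passed as the first argument.

// hostping.go: sketch of the host-reachability probe above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pod := os.Args[1]
	// Extract the resolved address exactly as the test does, then ping it once.
	script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "--context", "ha-335894", "exec", pod,
		"--", "sh", "-c", script).Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out))
	if err := exec.Command("kubectl", "--context", "ha-335894", "exec", pod,
		"--", "sh", "-c", "ping -c 1 "+ip).Run(); err != nil {
		fmt.Println("host unreachable from", pod, ":", err)
		return
	}
	fmt.Println("host reachable from", pod, "at", ip)
}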

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (33.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-335894 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-335894 -v=7 --alsologtostderr: (32.19757528s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (33.02s)
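AddWorkerNode grows the running cluster with "node add" and re-checks "status"; the AddSecondaryNode entry later in this report does the same with --control-plane to join another control-plane member. A minimal sketch of both variants, assuming the ha-335894 profile is already running.

// nodeadd.go: sketch of the node-add variants exercised in this report.
package main

import (
	"os"
	"os/exec"
)

func add(extra ...string) error {
	args := append([]string{"node", "add", "-p", "ha-335894"}, extra...)
	cmd := exec.Command("minikube", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := add(); err != nil { // worker node, as in AddWorkerNode
		panic(err)
	}
	if err := add("--control-plane"); err != nil { // as in AddSecondaryNode
		panic(err)
	}
}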

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-335894 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (15.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp testdata/cp-test.txt ha-335894:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp ha-335894:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3344059205/001/cp-test_ha-335894.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp ha-335894:/home/docker/cp-test.txt ha-335894-m02:/home/docker/cp-test_ha-335894_ha-335894-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m02 "sudo cat /home/docker/cp-test_ha-335894_ha-335894-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp ha-335894:/home/docker/cp-test.txt ha-335894-m03:/home/docker/cp-test_ha-335894_ha-335894-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m03 "sudo cat /home/docker/cp-test_ha-335894_ha-335894-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp ha-335894:/home/docker/cp-test.txt ha-335894-m04:/home/docker/cp-test_ha-335894_ha-335894-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m04 "sudo cat /home/docker/cp-test_ha-335894_ha-335894-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp testdata/cp-test.txt ha-335894-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp ha-335894-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3344059205/001/cp-test_ha-335894-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp ha-335894-m02:/home/docker/cp-test.txt ha-335894:/home/docker/cp-test_ha-335894-m02_ha-335894.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894 "sudo cat /home/docker/cp-test_ha-335894-m02_ha-335894.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp ha-335894-m02:/home/docker/cp-test.txt ha-335894-m03:/home/docker/cp-test_ha-335894-m02_ha-335894-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m03 "sudo cat /home/docker/cp-test_ha-335894-m02_ha-335894-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp ha-335894-m02:/home/docker/cp-test.txt ha-335894-m04:/home/docker/cp-test_ha-335894-m02_ha-335894-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m04 "sudo cat /home/docker/cp-test_ha-335894-m02_ha-335894-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp testdata/cp-test.txt ha-335894-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp ha-335894-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3344059205/001/cp-test_ha-335894-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp ha-335894-m03:/home/docker/cp-test.txt ha-335894:/home/docker/cp-test_ha-335894-m03_ha-335894.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894 "sudo cat /home/docker/cp-test_ha-335894-m03_ha-335894.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp ha-335894-m03:/home/docker/cp-test.txt ha-335894-m02:/home/docker/cp-test_ha-335894-m03_ha-335894-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m02 "sudo cat /home/docker/cp-test_ha-335894-m03_ha-335894-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp ha-335894-m03:/home/docker/cp-test.txt ha-335894-m04:/home/docker/cp-test_ha-335894-m03_ha-335894-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m04 "sudo cat /home/docker/cp-test_ha-335894-m03_ha-335894-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp testdata/cp-test.txt ha-335894-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp ha-335894-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3344059205/001/cp-test_ha-335894-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp ha-335894-m04:/home/docker/cp-test.txt ha-335894:/home/docker/cp-test_ha-335894-m04_ha-335894.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894 "sudo cat /home/docker/cp-test_ha-335894-m04_ha-335894.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp ha-335894-m04:/home/docker/cp-test.txt ha-335894-m02:/home/docker/cp-test_ha-335894-m04_ha-335894-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m02 "sudo cat /home/docker/cp-test_ha-335894-m04_ha-335894-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 cp ha-335894-m04:/home/docker/cp-test.txt ha-335894-m03:/home/docker/cp-test_ha-335894-m04_ha-335894-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 ssh -n ha-335894-m03 "sudo cat /home/docker/cp-test_ha-335894-m04_ha-335894-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.61s)
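CopyFile pushes testdata/cp-test.txt to every node, copies it node-to-node in all directions, and reads each copy back over ssh, so every "cp" is immediately verified by "ssh -n <node> sudo cat". A minimal sketch of one copy-and-verify round trip, assuming the ha-335894 profile and its node names as shown above.

// cpverify.go: sketch of a single copy-and-verify step from the sequence above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "ha-335894"
	src := "testdata/cp-test.txt"
	dst := "ha-335894-m02:/home/docker/cp-test.txt"

	// minikube cp copies from the host (or another node) into a node.
	if err := exec.Command("minikube", "-p", profile, "cp", src, dst).Run(); err != nil {
		panic(err)
	}
	// Read the file back on the target node to confirm the copy landed.
	out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", "ha-335894-m02",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}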

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-335894 node stop m02 -v=7 --alsologtostderr: (11.818056227s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-335894 status -v=7 --alsologtostderr: exit status 7 (644.449507ms)

                                                
                                                
-- stdout --
	ha-335894
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-335894-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-335894-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-335894-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 02:24:14.066807   86509 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:24:14.066917   86509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:24:14.066928   86509 out.go:358] Setting ErrFile to fd 2...
	I0205 02:24:14.066933   86509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:24:14.067167   86509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
	I0205 02:24:14.067351   86509 out.go:352] Setting JSON to false
	I0205 02:24:14.067382   86509 mustload.go:65] Loading cluster: ha-335894
	I0205 02:24:14.067413   86509 notify.go:220] Checking for updates...
	I0205 02:24:14.067904   86509 config.go:182] Loaded profile config "ha-335894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:24:14.067928   86509 status.go:174] checking status of ha-335894 ...
	I0205 02:24:14.068559   86509 cli_runner.go:164] Run: docker container inspect ha-335894 --format={{.State.Status}}
	I0205 02:24:14.086839   86509 status.go:371] ha-335894 host status = "Running" (err=<nil>)
	I0205 02:24:14.086869   86509 host.go:66] Checking if "ha-335894" exists ...
	I0205 02:24:14.087104   86509 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-335894
	I0205 02:24:14.107044   86509 host.go:66] Checking if "ha-335894" exists ...
	I0205 02:24:14.107276   86509 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0205 02:24:14.107315   86509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-335894
	I0205 02:24:14.125269   86509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/ha-335894/id_rsa Username:docker}
	I0205 02:24:14.214613   86509 ssh_runner.go:195] Run: systemctl --version
	I0205 02:24:14.218320   86509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 02:24:14.228393   86509 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0205 02:24:14.276547   86509 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:74 SystemTime:2025-02-05 02:24:14.26681383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0205 02:24:14.277122   86509 kubeconfig.go:125] found "ha-335894" server: "https://192.168.49.254:8443"
	I0205 02:24:14.277161   86509 api_server.go:166] Checking apiserver status ...
	I0205 02:24:14.277194   86509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 02:24:14.287625   86509 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1530/cgroup
	I0205 02:24:14.295911   86509 api_server.go:182] apiserver freezer: "13:freezer:/docker/cc5531ea113c8577b2282a5279cb84cd4687a87afe2de37b58bfe44086d7b547/crio/crio-5ece5737193dae8d562bb7c6a31f06f29b16d4355a537964c3ff27d722e69b0b"
	I0205 02:24:14.295961   86509 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cc5531ea113c8577b2282a5279cb84cd4687a87afe2de37b58bfe44086d7b547/crio/crio-5ece5737193dae8d562bb7c6a31f06f29b16d4355a537964c3ff27d722e69b0b/freezer.state
	I0205 02:24:14.303700   86509 api_server.go:204] freezer state: "THAWED"
	I0205 02:24:14.303723   86509 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0205 02:24:14.307574   86509 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0205 02:24:14.307601   86509 status.go:463] ha-335894 apiserver status = Running (err=<nil>)
	I0205 02:24:14.307612   86509 status.go:176] ha-335894 status: &{Name:ha-335894 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0205 02:24:14.307630   86509 status.go:174] checking status of ha-335894-m02 ...
	I0205 02:24:14.307946   86509 cli_runner.go:164] Run: docker container inspect ha-335894-m02 --format={{.State.Status}}
	I0205 02:24:14.325280   86509 status.go:371] ha-335894-m02 host status = "Stopped" (err=<nil>)
	I0205 02:24:14.325302   86509 status.go:384] host is not running, skipping remaining checks
	I0205 02:24:14.325308   86509 status.go:176] ha-335894-m02 status: &{Name:ha-335894-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0205 02:24:14.325325   86509 status.go:174] checking status of ha-335894-m03 ...
	I0205 02:24:14.325606   86509 cli_runner.go:164] Run: docker container inspect ha-335894-m03 --format={{.State.Status}}
	I0205 02:24:14.343894   86509 status.go:371] ha-335894-m03 host status = "Running" (err=<nil>)
	I0205 02:24:14.343917   86509 host.go:66] Checking if "ha-335894-m03" exists ...
	I0205 02:24:14.344155   86509 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-335894-m03
	I0205 02:24:14.360935   86509 host.go:66] Checking if "ha-335894-m03" exists ...
	I0205 02:24:14.361267   86509 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0205 02:24:14.361305   86509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-335894-m03
	I0205 02:24:14.378799   86509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/ha-335894-m03/id_rsa Username:docker}
	I0205 02:24:14.466409   86509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 02:24:14.476698   86509 kubeconfig.go:125] found "ha-335894" server: "https://192.168.49.254:8443"
	I0205 02:24:14.476723   86509 api_server.go:166] Checking apiserver status ...
	I0205 02:24:14.476749   86509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 02:24:14.486126   86509 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1439/cgroup
	I0205 02:24:14.494410   86509 api_server.go:182] apiserver freezer: "13:freezer:/docker/8881b8f54d514c0d57f2fdd8dca8eeebdb6013acf03cd6444b6fce40b0f06b14/crio/crio-b6772a142773bdecdad612efe6697def217761a6ef49213c20483cdf6b85a425"
	I0205 02:24:14.494464   86509 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8881b8f54d514c0d57f2fdd8dca8eeebdb6013acf03cd6444b6fce40b0f06b14/crio/crio-b6772a142773bdecdad612efe6697def217761a6ef49213c20483cdf6b85a425/freezer.state
	I0205 02:24:14.502176   86509 api_server.go:204] freezer state: "THAWED"
	I0205 02:24:14.502201   86509 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0205 02:24:14.507521   86509 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0205 02:24:14.507548   86509 status.go:463] ha-335894-m03 apiserver status = Running (err=<nil>)
	I0205 02:24:14.507558   86509 status.go:176] ha-335894-m03 status: &{Name:ha-335894-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0205 02:24:14.507575   86509 status.go:174] checking status of ha-335894-m04 ...
	I0205 02:24:14.507829   86509 cli_runner.go:164] Run: docker container inspect ha-335894-m04 --format={{.State.Status}}
	I0205 02:24:14.526540   86509 status.go:371] ha-335894-m04 host status = "Running" (err=<nil>)
	I0205 02:24:14.526564   86509 host.go:66] Checking if "ha-335894-m04" exists ...
	I0205 02:24:14.526827   86509 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-335894-m04
	I0205 02:24:14.544453   86509 host.go:66] Checking if "ha-335894-m04" exists ...
	I0205 02:24:14.544699   86509 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0205 02:24:14.544733   86509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-335894-m04
	I0205 02:24:14.563123   86509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/ha-335894-m04/id_rsa Username:docker}
	I0205 02:24:14.654605   86509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 02:24:14.665079   86509 status.go:176] ha-335894-m04 status: &{Name:ha-335894-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.46s)
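StopSecondaryNode stops m02 and then expects "status" to exit non-zero (7 in the run above): the stdout block still lists every node, with m02 reported as Stopped while the other control planes and the worker stay Running, and the non-zero exit code is what the test keys on. A minimal sketch that drives "status" and distinguishes a fully running cluster from a degraded one via the exit code, assuming the ha-335894 profile exists.

// statuscheck.go: sketch of interpreting the minikube status exit code, as the
// test above does after stopping m02.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "ha-335894", "status")
	out, err := cmd.Output()
	fmt.Print(string(out)) // per-node breakdown is printed even on failure
	if err == nil {
		fmt.Println("all nodes running")
		return
	}
	// A non-zero exit (7 in the run above) means at least one node is not
	// fully up.
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("cluster degraded, status exit code:", exitErr.ExitCode())
		return
	}
	panic(err)
}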

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (20.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-335894 node start m02 -v=7 --alsologtostderr: (19.346880746s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.23s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (177.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-335894 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-335894 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-335894 -v=7 --alsologtostderr: (36.434173476s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-335894 --wait=true -v=7 --alsologtostderr
E0205 02:25:36.832821   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:26:21.878115   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:26:21.884492   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:26:21.896261   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:26:21.917731   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:26:21.959160   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:26:22.040595   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:26:22.202461   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:26:22.524722   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:26:23.166828   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:26:24.448846   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:26:27.010652   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:26:32.132870   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:26:42.375163   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:26:59.895573   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:27:02.857396   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-335894 --wait=true -v=7 --alsologtostderr: (2m20.978929543s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-335894
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (177.51s)
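RestartClusterKeepsNodes records the "node list" output, stops the whole profile, restarts it with --wait=true, and checks that the node list is unchanged, i.e. a full stop/start cycle preserves cluster membership. (The interleaved cert_rotation errors come from background kubeconfig watchers for profiles deleted earlier in the run and do not affect the result.) A minimal sketch of that before/after comparison, assuming the ha-335894 profile and minikube on PATH.

// restartkeeps.go: sketch of the node-list comparison performed above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func nodeList() string {
	out, err := exec.Command("minikube", "node", "list", "-p", "ha-335894").Output()
	if err != nil {
		panic(err)
	}
	return string(out)
}

func run(args ...string) {
	cmd := exec.Command("minikube", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	before := nodeList()
	run("stop", "-p", "ha-335894")
	run("start", "-p", "ha-335894", "--wait=true")
	after := nodeList()
	if before != after {
		fmt.Println("node list changed across restart")
		os.Exit(1)
	}
	fmt.Println("node list preserved across restart")
}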

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 node delete m03 -v=7 --alsologtostderr
E0205 02:27:43.819549   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-335894 node delete m03 -v=7 --alsologtostderr: (10.947337508s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-335894 stop -v=7 --alsologtostderr: (35.339880403s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-335894 status -v=7 --alsologtostderr: exit status 7 (101.144867ms)

                                                
                                                
-- stdout --
	ha-335894
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-335894-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-335894-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 02:28:21.605659  103639 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:28:21.605769  103639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:28:21.605777  103639 out.go:358] Setting ErrFile to fd 2...
	I0205 02:28:21.605781  103639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:28:21.605968  103639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
	I0205 02:28:21.606128  103639 out.go:352] Setting JSON to false
	I0205 02:28:21.606155  103639 mustload.go:65] Loading cluster: ha-335894
	I0205 02:28:21.606310  103639 notify.go:220] Checking for updates...
	I0205 02:28:21.606507  103639 config.go:182] Loaded profile config "ha-335894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:28:21.606524  103639 status.go:174] checking status of ha-335894 ...
	I0205 02:28:21.606938  103639 cli_runner.go:164] Run: docker container inspect ha-335894 --format={{.State.Status}}
	I0205 02:28:21.625366  103639 status.go:371] ha-335894 host status = "Stopped" (err=<nil>)
	I0205 02:28:21.625388  103639 status.go:384] host is not running, skipping remaining checks
	I0205 02:28:21.625394  103639 status.go:176] ha-335894 status: &{Name:ha-335894 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0205 02:28:21.625444  103639 status.go:174] checking status of ha-335894-m02 ...
	I0205 02:28:21.625730  103639 cli_runner.go:164] Run: docker container inspect ha-335894-m02 --format={{.State.Status}}
	I0205 02:28:21.642994  103639 status.go:371] ha-335894-m02 host status = "Stopped" (err=<nil>)
	I0205 02:28:21.643025  103639 status.go:384] host is not running, skipping remaining checks
	I0205 02:28:21.643032  103639 status.go:176] ha-335894-m02 status: &{Name:ha-335894-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0205 02:28:21.643055  103639 status.go:174] checking status of ha-335894-m04 ...
	I0205 02:28:21.643321  103639 cli_runner.go:164] Run: docker container inspect ha-335894-m04 --format={{.State.Status}}
	I0205 02:28:21.660777  103639 status.go:371] ha-335894-m04 host status = "Stopped" (err=<nil>)
	I0205 02:28:21.660813  103639 status.go:384] host is not running, skipping remaining checks
	I0205 02:28:21.660824  103639 status.go:176] ha-335894-m04 status: &{Name:ha-335894-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.44s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (87.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-335894 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0205 02:29:05.741201   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-335894 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m26.646851564s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (87.43s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (40.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-335894 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-335894 --control-plane -v=7 --alsologtostderr: (39.305967218s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-335894 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (40.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                    
x
+
TestJSONOutput/start/Command (43.8s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-806091 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-806091 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (43.798337517s)
--- PASS: TestJSONOutput/start/Command (43.80s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-806091 --output=json --user=testUser
E0205 02:31:21.878243   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.58s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-806091 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.69s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-806091 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-806091 --output=json --user=testUser: (5.689983127s)
--- PASS: TestJSONOutput/stop/Command (5.69s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-173896 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-173896 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.069155ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b76be555-6f32-4f15-a994-462b56036f08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-173896] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2e1a090d-8eb1-45e8-8974-6cf63838666b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20363"}}
	{"specversion":"1.0","id":"4ff759d7-d75c-4c5f-96cb-d5c25c258b2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4aef239d-c83d-45a6-8bda-888d1fa1201f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig"}}
	{"specversion":"1.0","id":"111355d8-23d3-424f-bfc3-4ae2831803f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube"}}
	{"specversion":"1.0","id":"088f1b81-a8a0-4de8-8c04-0514ea3ae7b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"36932dcc-4e79-49fb-9541-bd94563243de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"91d1e4d4-0a12-4769-bbdc-0f02081dd23e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-173896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-173896
--- PASS: TestErrorJSONOutput (0.20s)
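TestErrorJSONOutput runs "start" with --output=json and an intentionally unsupported --driver=fail. Each stdout line above is a CloudEvents-style JSON object; the final io.k8s.sigs.minikube.error event carries the exit code (56) and the DRV_UNSUPPORTED_OS message that the test asserts on. A minimal sketch that decodes such a stream and pulls out step and error events, assuming the event lines are fed on stdin; the field names follow the events shown above.

// jsonevents.go: sketch of decoding the --output=json event stream shown above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines such as klog prefixes
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		}
	}
}

Piping the stdout block above through this sketch would print the initial setup step followed by the single DRV_UNSUPPORTED_OS error event.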

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (27.65s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-298024 --network=
E0205 02:31:49.584041   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-298024 --network=: (25.544841415s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-298024" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-298024
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-298024: (2.082151958s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.65s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (22.6s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-505192 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-505192 --network=bridge: (20.680587825s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-505192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-505192
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-505192: (1.903426561s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.60s)

                                                
                                    
TestKicExistingNetwork (22.7s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0205 02:32:23.834866   19390 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0205 02:32:23.851842   19390 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0205 02:32:23.851916   19390 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0205 02:32:23.851933   19390 cli_runner.go:164] Run: docker network inspect existing-network
W0205 02:32:23.868357   19390 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0205 02:32:23.868387   19390 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0205 02:32:23.868402   19390 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0205 02:32:23.868558   19390 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0205 02:32:23.885893   19390 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-86850cebc981 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:8b:ea:4c:15} reservation:<nil>}
I0205 02:32:23.886316   19390 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f19c20}
I0205 02:32:23.886343   19390 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0205 02:32:23.886382   19390 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0205 02:32:23.949373   19390 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-395922 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-395922 --network=existing-network: (20.704421823s)
helpers_test.go:175: Cleaning up "existing-network-395922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-395922
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-395922: (1.845110802s)
I0205 02:32:46.516642   19390 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.70s)
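The trace above shows the procedure: minikube probes for a free private subnet, creates a labelled bridge network with docker network create, and the cluster is then started against it with --network=existing-network. As an illustrative sketch only (the hard-coded subnet and network name are taken from this particular run; minikube itself selects a free subnet dynamically), the same network could be pre-created like this:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Reproduces the "docker network create" invocation from the log above.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "network create failed:", err)
		os.Exit(1)
	}
	// The cluster can then reuse it:
	//   out/minikube-linux-amd64 start -p <profile> --network=existing-network
}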

                                                
                                    
TestKicCustomSubnet (23.87s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-880400 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-880400 --subnet=192.168.60.0/24: (21.792217674s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-880400 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-880400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-880400
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-880400: (2.055444199s)
--- PASS: TestKicCustomSubnet (23.87s)

                                                
                                    
TestKicStaticIP (23.4s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-718476 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-718476 --static-ip=192.168.200.200: (21.22912321s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-718476 ip
helpers_test.go:175: Cleaning up "static-ip-718476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-718476
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-718476: (2.046262984s)
--- PASS: TestKicStaticIP (23.40s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (46.75s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-826862 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-826862 --driver=docker  --container-runtime=crio: (21.048927643s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-839946 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-839946 --driver=docker  --container-runtime=crio: (20.543179895s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-826862
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-839946
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-839946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-839946
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-839946: (1.840564339s)
helpers_test.go:175: Cleaning up "first-826862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-826862
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-826862: (2.161334358s)
--- PASS: TestMinikubeProfile (46.75s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.37s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-512077 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-512077 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.372115094s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.37s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-512077 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.34s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-525883 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-525883 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.33846724s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.34s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-525883 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.58s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-512077 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-512077 --alsologtostderr -v=5: (1.580924546s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-525883 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
TestMountStart/serial/Stop (1.17s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-525883
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-525883: (1.168498184s)
--- PASS: TestMountStart/serial/Stop (1.17s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.13s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-525883
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-525883: (6.126471915s)
--- PASS: TestMountStart/serial/RestartStopped (7.13s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-525883 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (67.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-522122 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0205 02:35:36.830017   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-522122 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m6.827654943s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (67.28s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522122 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522122 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-522122 -- rollout status deployment/busybox: (1.997655066s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522122 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522122 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522122 -- exec busybox-58667487b6-9jm5q -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522122 -- exec busybox-58667487b6-xgbqs -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522122 -- exec busybox-58667487b6-9jm5q -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522122 -- exec busybox-58667487b6-xgbqs -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522122 -- exec busybox-58667487b6-9jm5q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522122 -- exec busybox-58667487b6-xgbqs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.39s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522122 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522122 -- exec busybox-58667487b6-9jm5q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522122 -- exec busybox-58667487b6-9jm5q -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522122 -- exec busybox-58667487b6-xgbqs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-522122 -- exec busybox-58667487b6-xgbqs -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)

                                                
                                    
TestMultiNode/serial/AddNode (31.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-522122 -v 3 --alsologtostderr
E0205 02:36:21.878817   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-522122 -v 3 --alsologtostderr: (30.512411789s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (31.11s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-522122 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 cp testdata/cp-test.txt multinode-522122:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 cp multinode-522122:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1165936795/001/cp-test_multinode-522122.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 cp multinode-522122:/home/docker/cp-test.txt multinode-522122-m02:/home/docker/cp-test_multinode-522122_multinode-522122-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122-m02 "sudo cat /home/docker/cp-test_multinode-522122_multinode-522122-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 cp multinode-522122:/home/docker/cp-test.txt multinode-522122-m03:/home/docker/cp-test_multinode-522122_multinode-522122-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122-m03 "sudo cat /home/docker/cp-test_multinode-522122_multinode-522122-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 cp testdata/cp-test.txt multinode-522122-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 cp multinode-522122-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1165936795/001/cp-test_multinode-522122-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 cp multinode-522122-m02:/home/docker/cp-test.txt multinode-522122:/home/docker/cp-test_multinode-522122-m02_multinode-522122.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122 "sudo cat /home/docker/cp-test_multinode-522122-m02_multinode-522122.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 cp multinode-522122-m02:/home/docker/cp-test.txt multinode-522122-m03:/home/docker/cp-test_multinode-522122-m02_multinode-522122-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122-m03 "sudo cat /home/docker/cp-test_multinode-522122-m02_multinode-522122-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 cp testdata/cp-test.txt multinode-522122-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 cp multinode-522122-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1165936795/001/cp-test_multinode-522122-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 cp multinode-522122-m03:/home/docker/cp-test.txt multinode-522122:/home/docker/cp-test_multinode-522122-m03_multinode-522122.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122 "sudo cat /home/docker/cp-test_multinode-522122-m03_multinode-522122.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 cp multinode-522122-m03:/home/docker/cp-test.txt multinode-522122-m02:/home/docker/cp-test_multinode-522122-m03_multinode-522122-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 ssh -n multinode-522122-m02 "sudo cat /home/docker/cp-test_multinode-522122-m03_multinode-522122-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.91s)

                                                
                                    
TestMultiNode/serial/StopNode (2.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-522122 node stop m03: (1.172683957s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-522122 status: exit status 7 (451.5004ms)

                                                
                                                
-- stdout --
	multinode-522122
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-522122-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-522122-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-522122 status --alsologtostderr: exit status 7 (459.891102ms)

                                                
                                                
-- stdout --
	multinode-522122
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-522122-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-522122-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 02:36:37.649654  168920 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:36:37.649768  168920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:36:37.649777  168920 out.go:358] Setting ErrFile to fd 2...
	I0205 02:36:37.649781  168920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:36:37.649960  168920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
	I0205 02:36:37.650146  168920 out.go:352] Setting JSON to false
	I0205 02:36:37.650183  168920 mustload.go:65] Loading cluster: multinode-522122
	I0205 02:36:37.650232  168920 notify.go:220] Checking for updates...
	I0205 02:36:37.650598  168920 config.go:182] Loaded profile config "multinode-522122": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:36:37.650616  168920 status.go:174] checking status of multinode-522122 ...
	I0205 02:36:37.651022  168920 cli_runner.go:164] Run: docker container inspect multinode-522122 --format={{.State.Status}}
	I0205 02:36:37.668940  168920 status.go:371] multinode-522122 host status = "Running" (err=<nil>)
	I0205 02:36:37.668961  168920 host.go:66] Checking if "multinode-522122" exists ...
	I0205 02:36:37.669205  168920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-522122
	I0205 02:36:37.686241  168920 host.go:66] Checking if "multinode-522122" exists ...
	I0205 02:36:37.686522  168920 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0205 02:36:37.686569  168920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-522122
	I0205 02:36:37.704586  168920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/multinode-522122/id_rsa Username:docker}
	I0205 02:36:37.794540  168920 ssh_runner.go:195] Run: systemctl --version
	I0205 02:36:37.798416  168920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 02:36:37.808917  168920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0205 02:36:37.859166  168920 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:64 SystemTime:2025-02-05 02:36:37.849066026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0205 02:36:37.859761  168920 kubeconfig.go:125] found "multinode-522122" server: "https://192.168.67.2:8443"
	I0205 02:36:37.859793  168920 api_server.go:166] Checking apiserver status ...
	I0205 02:36:37.859830  168920 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0205 02:36:37.869714  168920 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1497/cgroup
	I0205 02:36:37.877992  168920 api_server.go:182] apiserver freezer: "13:freezer:/docker/e68ec39d8475b947c3723f92939c9073790df29b9337d65ff57a5c67294d5c97/crio/crio-488ae8457dec49769ad8bf34f16ae12cb67fb8028c5657d9608e8943251e9ca7"
	I0205 02:36:37.878065  168920 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e68ec39d8475b947c3723f92939c9073790df29b9337d65ff57a5c67294d5c97/crio/crio-488ae8457dec49769ad8bf34f16ae12cb67fb8028c5657d9608e8943251e9ca7/freezer.state
	I0205 02:36:37.885477  168920 api_server.go:204] freezer state: "THAWED"
	I0205 02:36:37.885504  168920 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0205 02:36:37.889675  168920 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0205 02:36:37.889696  168920 status.go:463] multinode-522122 apiserver status = Running (err=<nil>)
	I0205 02:36:37.889705  168920 status.go:176] multinode-522122 status: &{Name:multinode-522122 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0205 02:36:37.889720  168920 status.go:174] checking status of multinode-522122-m02 ...
	I0205 02:36:37.890005  168920 cli_runner.go:164] Run: docker container inspect multinode-522122-m02 --format={{.State.Status}}
	I0205 02:36:37.908430  168920 status.go:371] multinode-522122-m02 host status = "Running" (err=<nil>)
	I0205 02:36:37.908451  168920 host.go:66] Checking if "multinode-522122-m02" exists ...
	I0205 02:36:37.908751  168920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-522122-m02
	I0205 02:36:37.926134  168920 host.go:66] Checking if "multinode-522122-m02" exists ...
	I0205 02:36:37.926389  168920 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0205 02:36:37.926423  168920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-522122-m02
	I0205 02:36:37.943888  168920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/20363-12617/.minikube/machines/multinode-522122-m02/id_rsa Username:docker}
	I0205 02:36:38.034485  168920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0205 02:36:38.044763  168920 status.go:176] multinode-522122-m02 status: &{Name:multinode-522122-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0205 02:36:38.044803  168920 status.go:174] checking status of multinode-522122-m03 ...
	I0205 02:36:38.045077  168920 cli_runner.go:164] Run: docker container inspect multinode-522122-m03 --format={{.State.Status}}
	I0205 02:36:38.062600  168920 status.go:371] multinode-522122-m03 host status = "Stopped" (err=<nil>)
	I0205 02:36:38.062621  168920 status.go:384] host is not running, skipping remaining checks
	I0205 02:36:38.062628  168920 status.go:176] multinode-522122-m03 status: &{Name:multinode-522122-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.08s)
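The stderr trace above walks through how status is determined for each node: inspect the container state, SSH in to check the kubelet service, locate the apiserver process and its freezer cgroup, and finally probe the apiserver's /healthz endpoint. As a minimal sketch of that last step only (the address is taken from this run's log; skipping TLS verification is an assumption for illustration, whereas minikube uses the cluster's CA material):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the apiserver healthz endpoint seen in the log above.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}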

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-522122 node start m03 -v=7 --alsologtostderr: (8.31786533s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.97s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (85.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-522122
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-522122
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-522122: (24.672128124s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-522122 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-522122 --wait=true -v=8 --alsologtostderr: (1m0.846873304s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-522122
--- PASS: TestMultiNode/serial/RestartKeepsNodes (85.62s)

                                                
                                    
TestMultiNode/serial/DeleteNode (4.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-522122 node delete m03: (4.388502213s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.94s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-522122 stop: (23.544517323s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-522122 status: exit status 7 (89.401991ms)

                                                
                                                
-- stdout --
	multinode-522122
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-522122-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-522122 status --alsologtostderr: exit status 7 (83.782287ms)

                                                
                                                
-- stdout --
	multinode-522122
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-522122-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 02:38:41.279461  178208 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:38:41.279705  178208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:38:41.279713  178208 out.go:358] Setting ErrFile to fd 2...
	I0205 02:38:41.279717  178208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:38:41.279914  178208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
	I0205 02:38:41.280086  178208 out.go:352] Setting JSON to false
	I0205 02:38:41.280113  178208 mustload.go:65] Loading cluster: multinode-522122
	I0205 02:38:41.280292  178208 notify.go:220] Checking for updates...
	I0205 02:38:41.280552  178208 config.go:182] Loaded profile config "multinode-522122": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:38:41.280574  178208 status.go:174] checking status of multinode-522122 ...
	I0205 02:38:41.281099  178208 cli_runner.go:164] Run: docker container inspect multinode-522122 --format={{.State.Status}}
	I0205 02:38:41.299615  178208 status.go:371] multinode-522122 host status = "Stopped" (err=<nil>)
	I0205 02:38:41.299636  178208 status.go:384] host is not running, skipping remaining checks
	I0205 02:38:41.299644  178208 status.go:176] multinode-522122 status: &{Name:multinode-522122 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0205 02:38:41.299671  178208 status.go:174] checking status of multinode-522122-m02 ...
	I0205 02:38:41.299907  178208 cli_runner.go:164] Run: docker container inspect multinode-522122-m02 --format={{.State.Status}}
	I0205 02:38:41.317169  178208 status.go:371] multinode-522122-m02 host status = "Stopped" (err=<nil>)
	I0205 02:38:41.317188  178208 status.go:384] host is not running, skipping remaining checks
	I0205 02:38:41.317194  178208 status.go:176] multinode-522122-m02 status: &{Name:multinode-522122-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.72s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-522122 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-522122 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (49.056216689s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-522122 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.61s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-522122
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-522122-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-522122-m02 --driver=docker  --container-runtime=crio: exit status 14 (66.289191ms)

                                                
                                                
-- stdout --
	* [multinode-522122-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-522122-m02' is duplicated with machine name 'multinode-522122-m02' in profile 'multinode-522122'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-522122-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-522122-m03 --driver=docker  --container-runtime=crio: (20.780646917s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-522122
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-522122: exit status 80 (266.922153ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-522122 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-522122-m03 already exists in multinode-522122-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-522122-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-522122-m03: (1.837957579s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.00s)

                                                
                                    
TestPreload (103.47s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-511859 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0205 02:40:36.833806   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-511859 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m17.117176349s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-511859 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-511859 image pull gcr.io/k8s-minikube/busybox: (1.208615586s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-511859
E0205 02:41:21.881738   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-511859: (5.730207921s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-511859 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-511859 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (16.849373057s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-511859 image list
helpers_test.go:175: Cleaning up "test-preload-511859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-511859
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-511859: (2.324942411s)
--- PASS: TestPreload (103.47s)

                                                
                                    
TestScheduledStopUnix (100.38s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-284149 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-284149 --memory=2048 --driver=docker  --container-runtime=crio: (24.074063027s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-284149 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-284149 -n scheduled-stop-284149
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-284149 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0205 02:42:05.849107   19390 retry.go:31] will retry after 77.711µs: open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/scheduled-stop-284149/pid: no such file or directory
I0205 02:42:05.850270   19390 retry.go:31] will retry after 138.894µs: open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/scheduled-stop-284149/pid: no such file or directory
I0205 02:42:05.851429   19390 retry.go:31] will retry after 145.574µs: open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/scheduled-stop-284149/pid: no such file or directory
I0205 02:42:05.852580   19390 retry.go:31] will retry after 241.389µs: open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/scheduled-stop-284149/pid: no such file or directory
I0205 02:42:05.853721   19390 retry.go:31] will retry after 460.811µs: open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/scheduled-stop-284149/pid: no such file or directory
I0205 02:42:05.854877   19390 retry.go:31] will retry after 901.896µs: open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/scheduled-stop-284149/pid: no such file or directory
I0205 02:42:05.856012   19390 retry.go:31] will retry after 1.337747ms: open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/scheduled-stop-284149/pid: no such file or directory
I0205 02:42:05.858270   19390 retry.go:31] will retry after 1.586315ms: open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/scheduled-stop-284149/pid: no such file or directory
I0205 02:42:05.860618   19390 retry.go:31] will retry after 1.377251ms: open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/scheduled-stop-284149/pid: no such file or directory
I0205 02:42:05.862943   19390 retry.go:31] will retry after 3.184882ms: open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/scheduled-stop-284149/pid: no such file or directory
I0205 02:42:05.867206   19390 retry.go:31] will retry after 3.528335ms: open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/scheduled-stop-284149/pid: no such file or directory
I0205 02:42:05.871506   19390 retry.go:31] will retry after 5.825199ms: open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/scheduled-stop-284149/pid: no such file or directory
I0205 02:42:05.877812   19390 retry.go:31] will retry after 15.224084ms: open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/scheduled-stop-284149/pid: no such file or directory
I0205 02:42:05.894137   19390 retry.go:31] will retry after 13.591145ms: open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/scheduled-stop-284149/pid: no such file or directory
I0205 02:42:05.908472   19390 retry.go:31] will retry after 15.041976ms: open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/scheduled-stop-284149/pid: no such file or directory
I0205 02:42:05.923710   19390 retry.go:31] will retry after 39.863212ms: open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/scheduled-stop-284149/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-284149 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-284149 -n scheduled-stop-284149
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-284149
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-284149 --schedule 15s
E0205 02:42:44.947897   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-284149
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-284149: exit status 7 (70.936688ms)

                                                
                                                
-- stdout --
	scheduled-stop-284149
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-284149 -n scheduled-stop-284149
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-284149 -n scheduled-stop-284149: exit status 7 (71.751837ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-284149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-284149
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-284149: (4.897654177s)
--- PASS: TestScheduledStopUnix (100.38s)
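The scheduled-stop run above first polls for the profile's pid file, logging a growing "will retry after ..." delay on each miss, before the stop is scheduled, cancelled, and re-armed with --schedule 15s. A minimal Go sketch of that poll-with-backoff pattern (the path, attempt limit, and function name below are placeholders for illustration, not minikube's actual retry helper):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForFile polls for path, roughly doubling the wait between attempts,
    // and gives up after maxAttempts misses. Path, limit, and name are
    // placeholders; minikube's retry code has its own shape.
    func waitForFile(path string, maxAttempts int) error {
        delay := 500 * time.Microsecond
        for attempt := 0; attempt < maxAttempts; attempt++ {
            _, err := os.Stat(path)
            if err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
        return fmt.Errorf("%s did not appear within %d attempts", path, maxAttempts)
    }

    func main() {
        // Hypothetical pid-file location; the real one sits under the profile dir.
        _ = waitForFile("/tmp/scheduled-stop-example/pid", 12)
    }

Doubling the delay keeps the check cheap while the file is still being written, which matches the sub-millisecond to tens-of-milliseconds waits in the log above.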

                                                
                                    
x
+
TestInsufficientStorage (10.11s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-129093 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-129093 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.67427391s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a71048dd-c366-4013-88c6-32a6d17f140b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-129093] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"89747ce9-3dde-4ccf-85eb-2e6d2bfa0bb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20363"}}
	{"specversion":"1.0","id":"54eb9fa3-9b38-47f1-9edb-65ff83f16282","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"35ee94ce-0bf3-49ba-9284-33c8fc425dc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig"}}
	{"specversion":"1.0","id":"5331e7cb-b95f-4bd2-9764-facdf84ede6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube"}}
	{"specversion":"1.0","id":"bb7b7462-1d2a-4bcd-8c52-e8378e5fc3c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9c85e381-4e64-4c85-9f9c-dc83dfaa1369","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"33d82541-29aa-4096-97b0-d7964cd12340","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"74eb11c5-3bcf-448a-bc9c-88925bdf2d08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"481aa7d8-cedd-4cc3-853e-126eb9f4c326","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f20cc0e5-12b1-43ce-a57e-84271ea1ef97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"dd7a795f-eb02-4548-b119-d63c47bc6318","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-129093\" primary control-plane node in \"insufficient-storage-129093\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"da164dbb-cac9-4438-8460-3187013da75e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba080168-7b9c-47f4-a7d7-dccf68b32568","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ea1b2058-e61b-4145-8839-a205d869b35f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-129093 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-129093 --output=json --layout=cluster: exit status 7 (271.274627ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-129093","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-129093","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0205 02:43:29.667909  200558 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-129093" does not appear in /home/jenkins/minikube-integration/20363-12617/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-129093 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-129093 --output=json --layout=cluster: exit status 7 (270.258855ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-129093","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-129093","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0205 02:43:29.939520  200658 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-129093" does not appear in /home/jenkins/minikube-integration/20363-12617/kubeconfig
	E0205 02:43:29.949895  200658 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/insufficient-storage-129093/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-129093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-129093
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-129093: (1.892297513s)
--- PASS: TestInsufficientStorage (10.11s)
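With --output=json, each progress line from minikube start is a CloudEvents-style JSON object, and the out-of-disk failure above arrives as an io.k8s.sigs.minikube.error event with exitcode 26 (RSRC_DOCKER_STORAGE). A small Go sketch that scans such output and surfaces the step and error messages; the struct covers only the fields visible in the stdout block above:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event mirrors just the fields visible in the JSON lines above.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        // Pipe the output of "minikube start -p <profile> --output=json" on stdin.
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // the error event line above is long
        for sc.Scan() {
            var e event
            if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
                continue // ignore any non-JSON lines
            }
            switch e.Type {
            case "io.k8s.sigs.minikube.step":
                fmt.Printf("step %s/%s: %s\n", e.Data["currentstep"], e.Data["totalsteps"], e.Data["message"])
            case "io.k8s.sigs.minikube.error":
                fmt.Printf("error %s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
            }
        }
    }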

                                                
                                    
x
+
TestRunningBinaryUpgrade (53.91s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.677244168 start -p running-upgrade-488586 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.677244168 start -p running-upgrade-488586 --memory=2200 --vm-driver=docker  --container-runtime=crio: (28.344344897s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-488586 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-488586 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.636860454s)
helpers_test.go:175: Cleaning up "running-upgrade-488586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-488586
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-488586: (2.530198039s)
--- PASS: TestRunningBinaryUpgrade (53.91s)

                                                
                                    
x
+
TestKubernetesUpgrade (354.21s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-925222 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0205 02:45:36.831364   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-925222 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (52.222560653s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-925222
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-925222: (4.964494734s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-925222 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-925222 status --format={{.Host}}: exit status 7 (86.733655ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-925222 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-925222 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.434885673s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-925222 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-925222 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-925222 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (110.501172ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-925222] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-925222
	    minikube start -p kubernetes-upgrade-925222 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9252222 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-925222 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-925222 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-925222 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.855877976s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-925222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-925222
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-925222: (2.473520395s)
--- PASS: TestKubernetesUpgrade (354.21s)
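The downgrade attempt above fails fast with K8S_DOWNGRADE_UNSUPPORTED (exit 106) because the requested v1.20.0 is older than the cluster's existing v1.32.1. A stand-alone sketch of that guard using golang.org/x/mod/semver (requires the golang.org/x/mod module; the function name and wording are illustrative, not minikube's internal check):

    package main

    import (
        "fmt"

        "golang.org/x/mod/semver"
    )

    // checkDowngrade refuses to move an existing cluster to an older Kubernetes
    // version, mimicking the behaviour logged above.
    func checkDowngrade(current, requested string) error {
        if semver.Compare(requested, current) < 0 {
            return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
        }
        return nil
    }

    func main() {
        if err := checkDowngrade("v1.32.1", "v1.20.0"); err != nil {
            fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
        }
    }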

                                                
                                    
x
+
TestMissingContainerUpgrade (115.55s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3512893697 start -p missing-upgrade-269786 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3512893697 start -p missing-upgrade-269786 --memory=2200 --driver=docker  --container-runtime=crio: (48.397705771s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-269786
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-269786: (10.394052548s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-269786
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-269786 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-269786 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.292468756s)
helpers_test.go:175: Cleaning up "missing-upgrade-269786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-269786
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-269786: (1.967592372s)
--- PASS: TestMissingContainerUpgrade (115.55s)

                                                
                                    
x
+
TestPause/serial/Start (54.61s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-428235 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0205 02:43:39.899982   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-428235 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (54.605882509s)
--- PASS: TestPause/serial/Start (54.61s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (20.46s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-428235 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-428235 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.440246401s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (20.46s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.5s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.50s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (93.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3197214271 start -p stopped-upgrade-178609 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3197214271 start -p stopped-upgrade-178609 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m5.418842615s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3197214271 -p stopped-upgrade-178609 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3197214271 -p stopped-upgrade-178609 stop: (2.318874989s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-178609 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-178609 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.438348581s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (93.18s)

                                                
                                    
x
+
TestPause/serial/Pause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-428235 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.69s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-428235 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-428235 --output=json --layout=cluster: exit status 2 (314.62058ms)

                                                
                                                
-- stdout --
	{"Name":"pause-428235","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-428235","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
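The --layout=cluster status above is a single JSON document with per-node and per-component status codes (418 Paused, 405 Stopped, 200 OK). A Go sketch that shells out to the same command and decodes only the fields visible in this report; the struct names are illustrative, not minikube's own types:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type component struct {
        Name       string `json:"Name"`
        StatusCode int    `json:"StatusCode"`
        StatusName string `json:"StatusName"`
    }

    type clusterStatus struct {
        Name          string               `json:"Name"`
        StatusCode    int                  `json:"StatusCode"`
        StatusName    string               `json:"StatusName"`
        BinaryVersion string               `json:"BinaryVersion"`
        Components    map[string]component `json:"Components"`
        Nodes         []struct {
            Name       string               `json:"Name"`
            StatusCode int                  `json:"StatusCode"`
            StatusName string               `json:"StatusName"`
            Components map[string]component `json:"Components"`
        } `json:"Nodes"`
    }

    func main() {
        // Profile name matches the test above; adjust for another profile.
        out, err := exec.Command("out/minikube-linux-amd64", "status", "-p", "pause-428235",
            "--output=json", "--layout=cluster").Output()
        if err != nil {
            // A paused cluster exits non-zero (exit status 2 above) but still
            // writes the JSON to stdout, so keep going and try to decode it.
            fmt.Println("status exited with:", err)
        }
        var st clusterStatus
        if err := json.Unmarshal(out, &st); err != nil {
            fmt.Println("could not parse status output:", err)
            return
        }
        if len(st.Nodes) > 0 {
            fmt.Printf("%s: %s (apiserver %s, kubelet %s)\n", st.Name, st.StatusName,
                st.Nodes[0].Components["apiserver"].StatusName,
                st.Nodes[0].Components["kubelet"].StatusName)
        }
    }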

                                                
                                    
x
+
TestPause/serial/Unpause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-428235 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.86s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-428235 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.86s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.44s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-428235 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-428235 --alsologtostderr -v=5: (3.441241066s)
--- PASS: TestPause/serial/DeletePaused (3.44s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.72s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-428235
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-428235: exit status 1 (22.84589ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-428235: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.72s)
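Cleanup verification above boils down to checking that Docker no longer knows the profile's volume: "docker volume inspect pause-428235" exits 1 with "no such volume". A sketch of that check, assuming only that the docker CLI is on PATH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // volumeGone reports whether "docker volume inspect <name>" fails, which is
    // how the test above concludes the profile's volume was removed.
    func volumeGone(name string) bool {
        err := exec.Command("docker", "volume", "inspect", name).Run()
        return err != nil // exit status 1 => "no such volume"
    }

    func main() {
        fmt.Println("pause-428235 volume removed:", volumeGone("pause-428235"))
    }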

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-178609
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-315000 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
E0205 02:46:21.878087   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-315000 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (654.516299ms)

                                                
                                                
-- stdout --
	* [false-315000] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0205 02:46:21.301322  239047 out.go:345] Setting OutFile to fd 1 ...
	I0205 02:46:21.301437  239047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:46:21.301449  239047 out.go:358] Setting ErrFile to fd 2...
	I0205 02:46:21.301455  239047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0205 02:46:21.301700  239047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20363-12617/.minikube/bin
	I0205 02:46:21.302297  239047 out.go:352] Setting JSON to false
	I0205 02:46:21.303549  239047 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5327,"bootTime":1738718254,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0205 02:46:21.303663  239047 start.go:139] virtualization: kvm guest
	I0205 02:46:21.334281  239047 out.go:177] * [false-315000] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0205 02:46:21.397904  239047 notify.go:220] Checking for updates...
	I0205 02:46:21.457875  239047 out.go:177]   - MINIKUBE_LOCATION=20363
	I0205 02:46:21.473160  239047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0205 02:46:21.476971  239047 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig
	I0205 02:46:21.527252  239047 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube
	I0205 02:46:21.529729  239047 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0205 02:46:21.532554  239047 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0205 02:46:21.584148  239047 config.go:182] Loaded profile config "cert-expiration-472446": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:46:21.584276  239047 config.go:182] Loaded profile config "kubernetes-upgrade-925222": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0205 02:46:21.584392  239047 config.go:182] Loaded profile config "missing-upgrade-269786": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0205 02:46:21.584523  239047 driver.go:394] Setting default libvirt URI to qemu:///system
	I0205 02:46:21.607866  239047 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0205 02:46:21.607951  239047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0205 02:46:21.659142  239047 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:71 SystemTime:2025-02-05 02:46:21.649353948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0205 02:46:21.659272  239047 docker.go:318] overlay module found
	I0205 02:46:21.688365  239047 out.go:177] * Using the docker driver based on user configuration
	I0205 02:46:21.691494  239047 start.go:297] selected driver: docker
	I0205 02:46:21.691520  239047 start.go:901] validating driver "docker" against <nil>
	I0205 02:46:21.691534  239047 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0205 02:46:21.736266  239047 out.go:201] 
	W0205 02:46:21.761926  239047 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0205 02:46:21.831674  239047 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-315000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-315000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-315000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-315000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-315000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-315000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-315000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-315000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-315000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-315000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-315000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-315000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-315000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-315000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20363-12617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 05 Feb 2025 02:44:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-472446
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20363-12617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 05 Feb 2025 02:46:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-925222
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20363-12617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 05 Feb 2025 02:45:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-269786
contexts:
- context:
    cluster: cert-expiration-472446
    extensions:
    - extension:
        last-update: Wed, 05 Feb 2025 02:44:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-472446
  name: cert-expiration-472446
- context:
    cluster: kubernetes-upgrade-925222
    user: kubernetes-upgrade-925222
  name: kubernetes-upgrade-925222
- context:
    cluster: missing-upgrade-269786
    extensions:
    - extension:
        last-update: Wed, 05 Feb 2025 02:45:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-269786
  name: missing-upgrade-269786
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-472446
  user:
    client-certificate: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/cert-expiration-472446/client.crt
    client-key: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/cert-expiration-472446/client.key
- name: kubernetes-upgrade-925222
  user:
    client-certificate: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kubernetes-upgrade-925222/client.crt
    client-key: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kubernetes-upgrade-925222/client.key
- name: missing-upgrade-269786
  user:
    client-certificate: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/missing-upgrade-269786/client.crt
    client-key: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/missing-upgrade-269786/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-315000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-315000"

                                                
                                                
----------------------- debugLogs end: false-315000 [took: 4.005319306s] --------------------------------
helpers_test.go:175: Cleaning up "false-315000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-315000
--- PASS: TestNetworkPlugins/group/false (4.83s)
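The start above is rejected before any container is created because the crio runtime needs a CNI plugin, so --cni=false is a usage error (exit 14, MK_USAGE). A miniature stand-in for that validation; the function below is illustrative, not the check minikube actually ships:

    package main

    import (
        "errors"
        "fmt"
    )

    // validateCNI reproduces, in miniature, the compatibility check logged above.
    func validateCNI(containerRuntime, cni string) error {
        if containerRuntime == "crio" && cni == "false" {
            return errors.New(`The "crio" container runtime requires CNI`)
        }
        return nil
    }

    func main() {
        if err := validateCNI("crio", "false"); err != nil {
            fmt.Println("X Exiting due to MK_USAGE:", err)
        }
    }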

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-193245 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-193245 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (82.330101ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-193245] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20363
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20363-12617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20363-12617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (31.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-193245 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-193245 --driver=docker  --container-runtime=crio: (31.420788451s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-193245 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (31.78s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-193245 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-193245 --no-kubernetes --driver=docker  --container-runtime=crio: (3.721399181s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-193245 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-193245 status -o json: exit status 2 (293.00973ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-193245","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-193245
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-193245: (1.93816157s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (5.95s)

                                                
                                    
TestNoKubernetes/serial/Start (4.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-193245 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-193245 --no-kubernetes --driver=docker  --container-runtime=crio: (4.877893038s)
--- PASS: TestNoKubernetes/serial/Start (4.88s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-193245 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-193245 "sudo systemctl is-active --quiet service kubelet": exit status 1 (272.135683ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (25.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (6.336441929s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (19.011041344s)
--- PASS: TestNoKubernetes/serial/ProfileList (25.35s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (45.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (45.926966189s)
--- PASS: TestNetworkPlugins/group/auto/Start (45.93s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-193245
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-193245: (1.42118146s)
--- PASS: TestNoKubernetes/serial/Stop (1.42s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-193245 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-193245 --driver=docker  --container-runtime=crio: (7.152485218s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.15s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-193245 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-193245 "sudo systemctl is-active --quiet service kubelet": exit status 1 (319.188193ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (42.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (42.941355492s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.94s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (59.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (59.881296809s)
--- PASS: TestNetworkPlugins/group/calico/Start (59.88s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-315000 "pgrep -a kubelet"
I0205 02:48:11.743049   19390 config.go:182] Loaded profile config "auto-315000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-315000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-5q555" [83f32cab-000c-442f-9630-358353c6a967] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-5q555" [83f32cab-000c-442f-9630-358353c6a967] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004107348s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-315000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-315000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-315000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-plrgl" [5b71fca7-ba2a-489c-beb3-f0c3f9223cdf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006447274s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-315000 "pgrep -a kubelet"
I0205 02:48:36.885890   19390 config.go:182] Loaded profile config "kindnet-315000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-315000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-lhvrm" [3de3fd91-fae7-4eb7-acc6-8d939d69f424] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-lhvrm" [3de3fd91-fae7-4eb7-acc6-8d939d69f424] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004088317s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (49.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (49.473857269s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.47s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-315000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-315000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-315000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-p52b9" [44568d71-5ab1-4563-942f-374de04911d2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004553394s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-315000 "pgrep -a kubelet"
I0205 02:48:55.072044   19390 config.go:182] Loaded profile config "calico-315000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-315000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ns9mr" [a0a8ed52-d761-44f3-9826-024ed56fc1f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-ns9mr" [a0a8ed52-d761-44f3-9826-024ed56fc1f8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005986419s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-315000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-315000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-315000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (68.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m8.898129165s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (68.90s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-315000 "pgrep -a kubelet"
I0205 02:49:32.391162   19390 config.go:182] Loaded profile config "custom-flannel-315000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-315000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-kv2z5" [fe97aa08-c8ae-4eca-8bc3-8fcd71ce93c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-kv2z5" [fe97aa08-c8ae-4eca-8bc3-8fcd71ce93c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003532295s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-315000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-315000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-315000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (33.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-315000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (33.511100571s)
--- PASS: TestNetworkPlugins/group/bridge/Start (33.51s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-315000 "pgrep -a kubelet"
I0205 02:50:16.884058   19390 config.go:182] Loaded profile config "enable-default-cni-315000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-315000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jxnng" [ac8173e6-288d-4253-b883-e30019f6e3a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-jxnng" [ac8173e6-288d-4253-b883-e30019f6e3a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003435701s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-315000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-315000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-315000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-315000 "pgrep -a kubelet"
I0205 02:50:36.752410   19390 config.go:182] Loaded profile config "bridge-315000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-315000 replace --force -f testdata/netcat-deployment.yaml
E0205 02:50:36.829948   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-5jwnt" [5981957a-0b22-45b5-9c42-ff02bb36a080] Pending
helpers_test.go:344: "netcat-5d86dc444-5jwnt" [5981957a-0b22-45b5-9c42-ff02bb36a080] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003202517s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (127.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-418372 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-418372 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m7.537622172s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (127.54s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (20.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-315000 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-315000 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.121411604s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0205 02:51:04.087005   19390 retry.go:31] will retry after 637.065178ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context bridge-315000 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-315000 exec deployment/netcat -- nslookup kubernetes.default: (5.193332568s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (20.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (59.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-110205 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-110205 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (59.445568326s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (59.45s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-315000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-315000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (45.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-678105 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-678105 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (45.247778346s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-110205 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [47cf0501-33d3-46a4-b8ec-feee772197d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [47cf0501-33d3-46a4-b8ec-feee772197d9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004245969s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-110205 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-110205 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-110205 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-110205 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-110205 --alsologtostderr -v=3: (11.925400098s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-678105 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3b5f6f77-f620-4ed0-a2d7-be339d754e63] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3b5f6f77-f620-4ed0-a2d7-be339d754e63] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00384695s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-678105 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-110205 -n no-preload-110205
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-110205 -n no-preload-110205: exit status 7 (76.436204ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-110205 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (273.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-110205 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-110205 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m33.495634919s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-110205 -n no-preload-110205
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (273.81s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-678105 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-678105 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-678105 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-678105 --alsologtostderr -v=3: (12.397661079s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-678105 -n embed-certs-678105
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-678105 -n embed-certs-678105: exit status 7 (83.952179ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-678105 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (262.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-678105 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-678105 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m22.035899963s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-678105 -n embed-certs-678105
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-418372 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e06504b3-60ae-4db9-bdeb-0829b87fda32] Pending
helpers_test.go:344: "busybox" [e06504b3-60ae-4db9-bdeb-0829b87fda32] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e06504b3-60ae-4db9-bdeb-0829b87fda32] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003602425s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-418372 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-418372 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-418372 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-418372 --alsologtostderr -v=3
E0205 02:53:11.986253   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/auto-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:11.992658   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/auto-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:12.004088   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/auto-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:12.025601   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/auto-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:12.067060   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/auto-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:12.148521   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/auto-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:12.310007   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/auto-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:12.631878   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/auto-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:13.274129   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/auto-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:14.555765   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/auto-315000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-418372 --alsologtostderr -v=3: (11.939203911s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.94s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-418372 -n old-k8s-version-418372
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-418372 -n old-k8s-version-418372: exit status 7 (75.371749ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-418372 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (132.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-418372 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0205 02:53:17.117398   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/auto-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:22.239596   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/auto-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:30.558559   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kindnet-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:30.565031   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kindnet-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:30.576946   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kindnet-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:30.598451   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kindnet-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:30.639891   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kindnet-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:30.721365   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kindnet-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:30.882883   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kindnet-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:31.204766   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kindnet-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:31.846602   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kindnet-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:32.481696   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/auto-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:33.128530   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kindnet-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:35.690816   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kindnet-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:40.812356   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kindnet-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:48.744661   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/calico-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:48.751172   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/calico-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:48.762657   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/calico-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:48.784200   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/calico-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:48.825693   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/calico-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:48.907156   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/calico-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:49.068700   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/calico-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:49.390404   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/calico-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:50.032682   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/calico-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:51.054326   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kindnet-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:53:51.314829   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/calico-315000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-418372 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m11.573545374s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-418372 -n old-k8s-version-418372
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (132.08s)
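Note: the repeated cert_rotation.go:171 "Unhandled Error" lines above, and throughout the rest of this run, come from client-go's certificate reloader: the test process still holds kubeconfig entries whose client-certificate paths point into profiles that have already been deleted (kindnet-315000, calico-315000, auto-315000, and so on), so every reload attempt fails with "no such file or directory". They appear alongside passing tests and are not assertions. A minimal shell sketch for spotting such stale references, assuming certificates are referenced by path (minikube's default) and that jq is installed; illustrative only, not part of the test suite:
	# List kubeconfig users whose client-certificate file no longer exists on disk.
	kubectl config view -o json |
	  jq -r '.users[] | select(.user["client-certificate"]) | .name + "\t" + .user["client-certificate"]' |
	  while IFS=$'\t' read -r name cert; do
	    [ -f "$cert" ] || echo "stale client cert for user ${name}: ${cert}"
	  done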

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-982341 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0205 02:54:11.535606   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kindnet-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:54:29.722874   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/calico-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:54:32.608050   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/custom-flannel-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:54:32.614469   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/custom-flannel-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:54:32.625921   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/custom-flannel-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:54:32.647390   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/custom-flannel-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:54:32.688868   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/custom-flannel-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:54:32.771117   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/custom-flannel-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:54:32.932683   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/custom-flannel-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:54:33.254245   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/custom-flannel-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:54:33.895534   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/custom-flannel-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:54:33.925996   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/auto-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:54:35.177511   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/custom-flannel-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:54:37.739681   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/custom-flannel-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:54:42.861401   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/custom-flannel-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:54:52.497818   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kindnet-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:54:53.103076   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/custom-flannel-315000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-982341 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (45.155224669s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.16s)
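Note: a quick manual cross-check of the --apiserver-port=8444 flag used above is to read the server URL minikube wrote into the kubeconfig for this profile (sketch only; it assumes the cluster entry is named after the profile, which is minikube's convention):
	# Should print a URL ending in :8444 for the custom API server port.
	kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-982341")].cluster.server}'; echo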

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-982341 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bef61863-9e9a-440c-b4c1-bac22c0c3d20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bef61863-9e9a-440c-b4c1-bac22c0c3d20] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004278102s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-982341 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)
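Note: the DeployApp step above can be replayed by hand; the kubectl wait below stands in for the readiness poll that helpers_test.go performs (sketch only, assuming the same testdata/busybox.yaml manifest is available):
	kubectl --context default-k8s-diff-port-982341 create -f testdata/busybox.yaml
	# Rough equivalent of the 8m poll for pods matching integration-test=busybox.
	kubectl --context default-k8s-diff-port-982341 wait --for=condition=Ready pod \
	  -l integration-test=busybox --timeout=8m
	# The test then asserts the open-file limit inside the container.
	kubectl --context default-k8s-diff-port-982341 exec busybox -- /bin/sh -c "ulimit -n"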

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-982341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-982341 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)
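Note: EnableAddonWhileActive enables metrics-server with image and registry overrides (--images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain) and then describes the deployment in kube-system. A hedged jsonpath variant of the same check:
	# The container image should reflect the fake.domain registry override.
	kubectl --context default-k8s-diff-port-982341 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'; echo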

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-982341 --alsologtostderr -v=3
E0205 02:55:10.685106   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/calico-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:13.584775   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/custom-flannel-315000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-982341 --alsologtostderr -v=3: (11.884629052s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.88s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-982341 -n default-k8s-diff-port-982341
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-982341 -n default-k8s-diff-port-982341: exit status 7 (77.596844ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-982341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)
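Note: the non-zero exit above is expected after a stop; the test treats exit status 7 as acceptable and only asserts the printed Host value before enabling the dashboard addon against the stopped profile. A hedged shell equivalent of that sequence:
	# Tolerate the non-zero exit from a stopped profile and assert the printed state.
	host=$(out/minikube-linux-amd64 status --format='{{.Host}}' \
	  -p default-k8s-diff-port-982341 -n default-k8s-diff-port-982341 || true)
	[ "$host" = "Stopped" ] && echo "profile stopped; enabling addon offline"
	out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-982341 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4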

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-982341 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0205 02:55:17.071692   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/enable-default-cni-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:17.078178   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/enable-default-cni-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:17.089663   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/enable-default-cni-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:17.111161   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/enable-default-cni-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:17.152995   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/enable-default-cni-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:17.234772   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/enable-default-cni-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:17.397081   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/enable-default-cni-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:17.718365   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/enable-default-cni-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:18.359930   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/enable-default-cni-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:19.641602   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/enable-default-cni-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:22.203842   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/enable-default-cni-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:27.325607   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/enable-default-cni-315000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-982341 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m22.10295118s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-982341 -n default-k8s-diff-port-982341
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-g6qld" [f759b217-791c-4956-95fa-9cf8f903bd5b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004293319s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
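Note: UserAppExistsAfterStop is a label-selector poll for the dashboard pod that must survive the stop/start cycle; a hedged kubectl equivalent of the 9m wait done by the Go helper:
	kubectl --context old-k8s-version-418372 -n kubernetes-dashboard wait \
	  --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m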

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-g6qld" [f759b217-791c-4956-95fa-9cf8f903bd5b] Running
E0205 02:55:36.830169   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/addons-217306/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:36.952884   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/bridge-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:36.959522   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/bridge-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:36.970985   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/bridge-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:36.992477   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/bridge-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:37.033891   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/bridge-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:37.116169   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/bridge-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:37.277789   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/bridge-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:37.567574   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/enable-default-cni-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:37.600075   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/bridge-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:38.241678   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/bridge-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:39.523919   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/bridge-315000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004349576s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-418372 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-418372 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
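Note: VerifyKubernetesImages lists the images loaded in the profile and logs anything outside the expected Kubernetes image set as "non-minikube". A rough jq sketch of the same filtering idea; it assumes the --format=json output exposes a repoTags field per image and that jq is installed:
	out/minikube-linux-amd64 -p old-k8s-version-418372 image list --format=json |
	  jq -r '.[].repoTags[]?' |
	  grep -vE '^(registry\.k8s\.io/|k8s\.gcr\.io/|gcr\.io/k8s-minikube/storage-provisioner)' || true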

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-418372 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-418372 -n old-k8s-version-418372
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-418372 -n old-k8s-version-418372: exit status 2 (321.391843ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-418372 -n old-k8s-version-418372
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-418372 -n old-k8s-version-418372: exit status 2 (318.504417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-418372 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-418372 -n old-k8s-version-418372
E0205 02:55:42.085662   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/bridge-315000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-418372 -n old-k8s-version-418372
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.75s)
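Note: the Pause step drives the profile through pause -> status -> unpause -> status; while paused, {{.APIServer}} prints Paused and {{.Kubelet}} prints Stopped, and both status calls exit 2, which the test tolerates. A hedged replay of the same sequence:
	out/minikube-linux-amd64 pause -p old-k8s-version-418372 --alsologtostderr -v=1
	# Both checks exit 2 while paused; assert the printed fields instead of the exit code.
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-418372 -n old-k8s-version-418372 || true
	out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p old-k8s-version-418372 -n old-k8s-version-418372 || true
	out/minikube-linux-amd64 unpause -p old-k8s-version-418372 --alsologtostderr -v=1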

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (26.86s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-377237 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0205 02:55:47.207567   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/bridge-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:54.546830   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/custom-flannel-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:55.848319   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/auto-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:57.448879   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/bridge-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:55:58.048843   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/enable-default-cni-315000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-377237 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (26.858943399s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.86s)
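Note: the newest-cni profile is started with --network-plugin=cni and --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16; a hedged way to confirm the custom CIDR reached the node objects after start (assumes the kubeconfig context is named after the profile, minikube's convention):
	# Each node's podCIDR should be carved out of 10.42.0.0/16.
	kubectl --context newest-cni-377237 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'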

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-377237 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0205 02:56:14.419535   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kindnet-315000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-377237 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.250179439s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-377237 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-377237 --alsologtostderr -v=3: (1.225571754s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-377237 -n newest-cni-377237
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-377237 -n newest-cni-377237: exit status 7 (73.640849ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-377237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (13.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-377237 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0205 02:56:17.931170   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/bridge-315000/client.crt: no such file or directory" logger="UnhandledError"
E0205 02:56:21.878066   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/functional-150463/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-377237 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (12.876715213s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-377237 -n newest-cni-377237
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-377237 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-377237 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-377237 --alsologtostderr -v=1: (1.223404173s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-377237 -n newest-cni-377237
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-377237 -n newest-cni-377237: exit status 2 (313.890748ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-377237 -n newest-cni-377237
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-377237 -n newest-cni-377237: exit status 2 (309.63622ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-377237 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-377237 -n newest-cni-377237
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-377237 -n newest-cni-377237
E0205 02:56:32.606820   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/calico-315000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-z87vw" [eb5a9d15-56ac-498b-b7ac-3c8e865616f8] Running
E0205 02:56:58.893051   19390 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/bridge-315000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003685509s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-z87vw" [eb5a9d15-56ac-498b-b7ac-3c8e865616f8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003791899s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-110205 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-9ctz6" [c1725a52-5489-480f-bfab-ced52c596812] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003971663s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-110205 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-110205 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-110205 -n no-preload-110205
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-110205 -n no-preload-110205: exit status 2 (304.846775ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-110205 -n no-preload-110205
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-110205 -n no-preload-110205: exit status 2 (302.782725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-110205 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-110205 -n no-preload-110205
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-110205 -n no-preload-110205
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.79s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-9ctz6" [c1725a52-5489-480f-bfab-ced52c596812] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003877863s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-678105 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-678105 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-678105 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-678105 -n embed-certs-678105
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-678105 -n embed-certs-678105: exit status 2 (301.051878ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-678105 -n embed-certs-678105
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-678105 -n embed-certs-678105: exit status 2 (301.343754ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-678105 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-678105 -n embed-certs-678105
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-678105 -n embed-certs-678105
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.74s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5k6hc" [79c90120-68c1-480e-a779-808d2606be52] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00383507s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-5k6hc" [79c90120-68c1-480e-a779-808d2606be52] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003992428s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-982341 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-982341 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-982341 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-982341 -n default-k8s-diff-port-982341
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-982341 -n default-k8s-diff-port-982341: exit status 2 (295.046624ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-982341 -n default-k8s-diff-port-982341
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-982341 -n default-k8s-diff-port-982341: exit status 2 (299.9182ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-982341 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-982341 -n default-k8s-diff-port-982341
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-982341 -n default-k8s-diff-port-982341
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.70s)

                                                
                                    

Test skip (27/324)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.26s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-217306 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.26s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
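The three DNS-forwarding skips above are gated on both the host OS and the driver. A minimal sketch of that kind of gate, again assuming a hypothetical DriverName() helper rather than minikube's actual harness:

package integration

import (
	"runtime"
	"testing"
)

// DriverName is a hypothetical stand-in for however the harness reports the
// driver in use; assumed here purely for illustration.
func DriverName() string { return "docker" }

func TestTunnelDNSForwarding(t *testing.T) {
	if runtime.GOOS != "darwin" || DriverName() != "hyperkit" {
		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
	}
	// dig/nslookup checks through the tunnel would run here
}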

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-315000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-315000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-315000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-315000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-315000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-315000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-315000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-315000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-315000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-315000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-315000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-315000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-315000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-315000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20363-12617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 05 Feb 2025 02:44:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-472446
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20363-12617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 05 Feb 2025 02:46:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-925222
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20363-12617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 05 Feb 2025 02:45:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-269786
contexts:
- context:
    cluster: cert-expiration-472446
    extensions:
    - extension:
        last-update: Wed, 05 Feb 2025 02:44:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-472446
  name: cert-expiration-472446
- context:
    cluster: kubernetes-upgrade-925222
    user: kubernetes-upgrade-925222
  name: kubernetes-upgrade-925222
- context:
    cluster: missing-upgrade-269786
    extensions:
    - extension:
        last-update: Wed, 05 Feb 2025 02:45:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-269786
  name: missing-upgrade-269786
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-472446
  user:
    client-certificate: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/cert-expiration-472446/client.crt
    client-key: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/cert-expiration-472446/client.key
- name: kubernetes-upgrade-925222
  user:
    client-certificate: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kubernetes-upgrade-925222/client.crt
    client-key: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kubernetes-upgrade-925222/client.key
- name: missing-upgrade-269786
  user:
    client-certificate: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/missing-upgrade-269786/client.crt
    client-key: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/missing-upgrade-269786/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-315000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-315000"

                                                
                                                
----------------------- debugLogs end: kubenet-315000 [took: 3.129537373s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-315000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-315000
--- SKIP: TestNetworkPlugins/group/kubenet (3.47s)
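The ">>> k8s: kubectl config:" dump in the debug logs above explains the repeated context errors: current-context is empty and there is no kubenet-315000 entry, so every kubectl call against that profile fails with "context was not found". A minimal sketch of inspecting such a kubeconfig with client-go; the file path below is assumed for illustration only (the actual kubeconfig location on the CI host is not shown in the log):

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path is an assumption for illustration; point it at the kubeconfig under inspection.
	cfg, err := clientcmd.LoadFromFile("/tmp/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
	for name, ctx := range cfg.Contexts {
		fmt.Printf("context %q -> cluster %q, user %q\n", name, ctx.Cluster, ctx.AuthInfo)
	}
}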

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-315000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-315000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-315000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-315000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20363-12617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 05 Feb 2025 02:44:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-472446
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20363-12617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 05 Feb 2025 02:46:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-925222
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20363-12617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 05 Feb 2025 02:45:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-269786
contexts:
- context:
    cluster: cert-expiration-472446
    extensions:
    - extension:
        last-update: Wed, 05 Feb 2025 02:44:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-472446
  name: cert-expiration-472446
- context:
    cluster: kubernetes-upgrade-925222
    user: kubernetes-upgrade-925222
  name: kubernetes-upgrade-925222
- context:
    cluster: missing-upgrade-269786
    extensions:
    - extension:
        last-update: Wed, 05 Feb 2025 02:45:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-269786
  name: missing-upgrade-269786
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-472446
  user:
    client-certificate: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/cert-expiration-472446/client.crt
    client-key: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/cert-expiration-472446/client.key
- name: kubernetes-upgrade-925222
  user:
    client-certificate: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kubernetes-upgrade-925222/client.crt
    client-key: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/kubernetes-upgrade-925222/client.key
- name: missing-upgrade-269786
  user:
    client-certificate: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/missing-upgrade-269786/client.crt
    client-key: /home/jenkins/minikube-integration/20363-12617/.minikube/profiles/missing-upgrade-269786/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-315000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-315000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-315000"

                                                
                                                
----------------------- debugLogs end: cilium-315000 [took: 3.583172492s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-315000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-315000
--- SKIP: TestNetworkPlugins/group/cilium (3.79s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-770156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-770156
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    