Test Report: Docker_Linux_crio 20534

ca4340fb5ae0bb74f259779cd383137dc2ab446a:2025-04-14:39132

Failed tests (12/330)

TestAddons/parallel/Ingress (152.03s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-295301 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-295301 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-295301 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5c671251-9adb-41f3-813e-69f29e1fc47d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5c671251-9adb-41f3-813e-69f29e1fc47d] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003836939s
I0414 11:03:30.723232 1763595 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-295301 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.860194074s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-295301 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
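
Note: status 28 above is curl's timeout code (CURLE_OPERATION_TIMEDOUT) surfacing through ssh, i.e. nothing inside the node answered on 127.0.0.1:80 for that Host header within curl's limit. A minimal sketch for re-running the probe by hand against this profile (the -m 10 timeout flag is an addition for fast feedback, not part of the test itself):

	# Re-run the failed ingress probe (assumes profile addons-295301 from
	# this run is still up; -m 10 caps curl's wait at 10 seconds).
	out/minikube-linux-amd64 -p addons-295301 ssh \
	  "curl -s -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# On failure minikube exits non-zero and stderr carries the remote
	# status, as in the "Process exited with status 28" line above.

	# Cross-check that the ingress-nginx controller reports Ready:
	kubectl --context addons-295301 -n ingress-nginx get pods \
	  --selector=app.kubernetes.io/component=controller -o wide
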
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-295301
helpers_test.go:235: (dbg) docker inspect addons-295301:

-- stdout --
	[
	    {
	        "Id": "1eaf183df2538f1858ab5406034938b27d2f5b666e91c080c2b6e45da45c8cd9",
	        "Created": "2025-04-14T11:01:04.306372147Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1765475,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-14T11:01:04.341530724Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fa6441117abd3f0ec72d78de011fb44ecb7b1e274ddcf240e39454ed1f98f388",
	        "ResolvConfPath": "/var/lib/docker/containers/1eaf183df2538f1858ab5406034938b27d2f5b666e91c080c2b6e45da45c8cd9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1eaf183df2538f1858ab5406034938b27d2f5b666e91c080c2b6e45da45c8cd9/hostname",
	        "HostsPath": "/var/lib/docker/containers/1eaf183df2538f1858ab5406034938b27d2f5b666e91c080c2b6e45da45c8cd9/hosts",
	        "LogPath": "/var/lib/docker/containers/1eaf183df2538f1858ab5406034938b27d2f5b666e91c080c2b6e45da45c8cd9/1eaf183df2538f1858ab5406034938b27d2f5b666e91c080c2b6e45da45c8cd9-json.log",
	        "Name": "/addons-295301",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-295301:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-295301",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1eaf183df2538f1858ab5406034938b27d2f5b666e91c080c2b6e45da45c8cd9",
	                "LowerDir": "/var/lib/docker/overlay2/34791c6f2ef1b8be0bb722f49daa69194c53d9f1564c292008fee6d174e54436-init/diff:/var/lib/docker/overlay2/c6d8bf10401ece8b3f73261aeb3a606dd205e8233950c57e244d9cccf977865e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/34791c6f2ef1b8be0bb722f49daa69194c53d9f1564c292008fee6d174e54436/merged",
	                "UpperDir": "/var/lib/docker/overlay2/34791c6f2ef1b8be0bb722f49daa69194c53d9f1564c292008fee6d174e54436/diff",
	                "WorkDir": "/var/lib/docker/overlay2/34791c6f2ef1b8be0bb722f49daa69194c53d9f1564c292008fee6d174e54436/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-295301",
	                "Source": "/var/lib/docker/volumes/addons-295301/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-295301",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-295301",
	                "name.minikube.sigs.k8s.io": "addons-295301",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "90d35da8fffe085b4888ce9527de78cb7ff3fe55aba56d0c4b693bb7411accf0",
	            "SandboxKey": "/var/run/docker/netns/90d35da8fffe",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-295301": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:c2:c9:91:fc:1f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b55fac861dc665ba97b1112dce403d84861754c1d1f5468d2bca8e7f435e0897",
	                    "EndpointID": "de1dea7f1f00ef5f641b6e21502650423674c96ae0612db324b774a733ff12b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-295301",
	                        "1eaf183df253"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
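
Note: per the inspect output above, the node's sshd is published on an ephemeral host port (22/tcp -> 127.0.0.1:32768). The same mapping can be read back with the Go-template filter the harness itself invokes later in these logs:

	# Print the host port backing 22/tcp for this node (returns 32768 in
	# this run; mirrors the cli_runner invocation in the minikube logs below).
	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  addons-295301
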
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-295301 -n addons-295301
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-295301 logs -n 25: (1.257409234s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-371317 | jenkins | v1.35.0 | 14 Apr 25 11:00 UTC |                     |
	|         | download-docker-371317                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-371317                                                                   | download-docker-371317 | jenkins | v1.35.0 | 14 Apr 25 11:00 UTC | 14 Apr 25 11:00 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-354346   | jenkins | v1.35.0 | 14 Apr 25 11:00 UTC |                     |
	|         | binary-mirror-354346                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43913                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-354346                                                                     | binary-mirror-354346   | jenkins | v1.35.0 | 14 Apr 25 11:00 UTC | 14 Apr 25 11:00 UTC |
	| addons  | disable dashboard -p                                                                        | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:00 UTC |                     |
	|         | addons-295301                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:00 UTC |                     |
	|         | addons-295301                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-295301 --wait=true                                                                | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:00 UTC | 14 Apr 25 11:02 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-295301 addons disable                                                                | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:02 UTC | 14 Apr 25 11:02 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-295301 addons disable                                                                | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:02 UTC | 14 Apr 25 11:03 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:03 UTC | 14 Apr 25 11:03 UTC |
	|         | -p addons-295301                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-295301 addons                                                                        | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:03 UTC | 14 Apr 25 11:03 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-295301 ssh cat                                                                       | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:03 UTC | 14 Apr 25 11:03 UTC |
	|         | /opt/local-path-provisioner/pvc-b3294b40-cb13-4826-81aa-9d006b235b14_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-295301 addons disable                                                                | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:03 UTC | 14 Apr 25 11:03 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-295301 addons disable                                                                | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:03 UTC | 14 Apr 25 11:03 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-295301 addons disable                                                                | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:03 UTC | 14 Apr 25 11:03 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-295301 addons disable                                                                | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:03 UTC | 14 Apr 25 11:03 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ip      | addons-295301 ip                                                                            | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:03 UTC | 14 Apr 25 11:03 UTC |
	| addons  | addons-295301 addons disable                                                                | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:03 UTC | 14 Apr 25 11:03 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-295301 addons                                                                        | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:03 UTC | 14 Apr 25 11:03 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-295301 addons                                                                        | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:03 UTC | 14 Apr 25 11:03 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-295301 addons                                                                        | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:03 UTC | 14 Apr 25 11:03 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-295301 ssh curl -s                                                                   | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:03 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-295301 addons                                                                        | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:04 UTC | 14 Apr 25 11:04 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-295301 addons                                                                        | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:04 UTC | 14 Apr 25 11:04 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-295301 ip                                                                            | addons-295301          | jenkins | v1.35.0 | 14 Apr 25 11:05 UTC | 14 Apr 25 11:05 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 11:00:39
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 11:00:39.591117 1764857 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:00:39.591391 1764857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:00:39.591400 1764857 out.go:358] Setting ErrFile to fd 2...
	I0414 11:00:39.591404 1764857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:00:39.591595 1764857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
	I0414 11:00:39.592232 1764857 out.go:352] Setting JSON to false
	I0414 11:00:39.593180 1764857 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":157388,"bootTime":1744471052,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 11:00:39.593305 1764857 start.go:139] virtualization: kvm guest
	I0414 11:00:39.595556 1764857 out.go:177] * [addons-295301] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 11:00:39.597246 1764857 notify.go:220] Checking for updates...
	I0414 11:00:39.597255 1764857 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 11:00:39.598847 1764857 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 11:00:39.600368 1764857 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-1756784/kubeconfig
	I0414 11:00:39.601731 1764857 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-1756784/.minikube
	I0414 11:00:39.603528 1764857 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 11:00:39.605156 1764857 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 11:00:39.606730 1764857 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 11:00:39.629066 1764857 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0414 11:00:39.629158 1764857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 11:00:39.681930 1764857 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-04-14 11:00:39.672911056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0414 11:00:39.682075 1764857 docker.go:318] overlay module found
	I0414 11:00:39.683967 1764857 out.go:177] * Using the docker driver based on user configuration
	I0414 11:00:39.685301 1764857 start.go:297] selected driver: docker
	I0414 11:00:39.685315 1764857 start.go:901] validating driver "docker" against <nil>
	I0414 11:00:39.685326 1764857 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 11:00:39.686118 1764857 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 11:00:39.740945 1764857 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-04-14 11:00:39.732061145 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0414 11:00:39.741100 1764857 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 11:00:39.741322 1764857 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 11:00:39.743203 1764857 out.go:177] * Using Docker driver with root privileges
	I0414 11:00:39.744404 1764857 cni.go:84] Creating CNI manager for ""
	I0414 11:00:39.744484 1764857 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0414 11:00:39.744498 1764857 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0414 11:00:39.744578 1764857 start.go:340] cluster config:
	{Name:addons-295301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-295301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:00:39.746234 1764857 out.go:177] * Starting "addons-295301" primary control-plane node in "addons-295301" cluster
	I0414 11:00:39.747774 1764857 cache.go:121] Beginning downloading kic base image for docker with crio
	I0414 11:00:39.749510 1764857 out.go:177] * Pulling base image v0.0.46-1744107393-20604 ...
	I0414 11:00:39.750921 1764857 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 11:00:39.750983 1764857 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-1756784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0414 11:00:39.750994 1764857 cache.go:56] Caching tarball of preloaded images
	I0414 11:00:39.751075 1764857 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local docker daemon
	I0414 11:00:39.751109 1764857 preload.go:172] Found /home/jenkins/minikube-integration/20534-1756784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0414 11:00:39.751121 1764857 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0414 11:00:39.751490 1764857 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/config.json ...
	I0414 11:00:39.751524 1764857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/config.json: {Name:mkbdfe2591b8be264e4575e95514931694248f07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:00:39.767494 1764857 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a to local cache
	I0414 11:00:39.767662 1764857 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local cache directory
	I0414 11:00:39.767686 1764857 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local cache directory, skipping pull
	I0414 11:00:39.767694 1764857 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a exists in cache, skipping pull
	I0414 11:00:39.767705 1764857 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a as a tarball
	I0414 11:00:39.767739 1764857 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a from local cache
	I0414 11:00:52.239095 1764857 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a from cached tarball
	I0414 11:00:52.239137 1764857 cache.go:230] Successfully downloaded all kic artifacts
	I0414 11:00:52.239183 1764857 start.go:360] acquireMachinesLock for addons-295301: {Name:mk8298125d180057a0a877335cdc534f9b70beb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0414 11:00:52.239294 1764857 start.go:364] duration metric: took 89.674µs to acquireMachinesLock for "addons-295301"
	I0414 11:00:52.239319 1764857 start.go:93] Provisioning new machine with config: &{Name:addons-295301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-295301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 11:00:52.239397 1764857 start.go:125] createHost starting for "" (driver="docker")
	I0414 11:00:52.241419 1764857 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0414 11:00:52.241684 1764857 start.go:159] libmachine.API.Create for "addons-295301" (driver="docker")
	I0414 11:00:52.241718 1764857 client.go:168] LocalClient.Create starting
	I0414 11:00:52.241839 1764857 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20534-1756784/.minikube/certs/ca.pem
	I0414 11:00:52.416616 1764857 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20534-1756784/.minikube/certs/cert.pem
	I0414 11:00:52.568566 1764857 cli_runner.go:164] Run: docker network inspect addons-295301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0414 11:00:52.586241 1764857 cli_runner.go:211] docker network inspect addons-295301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0414 11:00:52.586334 1764857 network_create.go:284] running [docker network inspect addons-295301] to gather additional debugging logs...
	I0414 11:00:52.586360 1764857 cli_runner.go:164] Run: docker network inspect addons-295301
	W0414 11:00:52.605082 1764857 cli_runner.go:211] docker network inspect addons-295301 returned with exit code 1
	I0414 11:00:52.605127 1764857 network_create.go:287] error running [docker network inspect addons-295301]: docker network inspect addons-295301: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-295301 not found
	I0414 11:00:52.605143 1764857 network_create.go:289] output of [docker network inspect addons-295301]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-295301 not found
	
	** /stderr **
	I0414 11:00:52.605284 1764857 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0414 11:00:52.623901 1764857 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001deea20}
	I0414 11:00:52.623957 1764857 network_create.go:124] attempt to create docker network addons-295301 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0414 11:00:52.624006 1764857 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-295301 addons-295301
	I0414 11:00:52.678302 1764857 network_create.go:108] docker network addons-295301 192.168.49.0/24 created
	I0414 11:00:52.678344 1764857 kic.go:121] calculated static IP "192.168.49.2" for the "addons-295301" container
	I0414 11:00:52.678437 1764857 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0414 11:00:52.696121 1764857 cli_runner.go:164] Run: docker volume create addons-295301 --label name.minikube.sigs.k8s.io=addons-295301 --label created_by.minikube.sigs.k8s.io=true
	I0414 11:00:52.716505 1764857 oci.go:103] Successfully created a docker volume addons-295301
	I0414 11:00:52.716595 1764857 cli_runner.go:164] Run: docker run --rm --name addons-295301-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-295301 --entrypoint /usr/bin/test -v addons-295301:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a -d /var/lib
	I0414 11:00:59.519388 1764857 cli_runner.go:217] Completed: docker run --rm --name addons-295301-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-295301 --entrypoint /usr/bin/test -v addons-295301:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a -d /var/lib: (6.802736254s)
	I0414 11:00:59.519423 1764857 oci.go:107] Successfully prepared a docker volume addons-295301
	I0414 11:00:59.519460 1764857 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 11:00:59.519491 1764857 kic.go:194] Starting extracting preloaded images to volume ...
	I0414 11:00:59.519554 1764857 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20534-1756784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-295301:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a -I lz4 -xf /preloaded.tar -C /extractDir
	I0414 11:01:04.237865 1764857 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20534-1756784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-295301:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a -I lz4 -xf /preloaded.tar -C /extractDir: (4.718229811s)
	I0414 11:01:04.237904 1764857 kic.go:203] duration metric: took 4.718409184s to extract preloaded images to volume ...
	W0414 11:01:04.238100 1764857 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0414 11:01:04.238240 1764857 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0414 11:01:04.290095 1764857 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-295301 --name addons-295301 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-295301 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-295301 --network addons-295301 --ip 192.168.49.2 --volume addons-295301:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a
	I0414 11:01:04.599019 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Running}}
	I0414 11:01:04.618185 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:04.638588 1764857 cli_runner.go:164] Run: docker exec addons-295301 stat /var/lib/dpkg/alternatives/iptables
	I0414 11:01:04.682069 1764857 oci.go:144] the created container "addons-295301" has a running status.
	I0414 11:01:04.682110 1764857 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa...
	I0414 11:01:05.019199 1764857 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0414 11:01:05.088493 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:05.114217 1764857 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0414 11:01:05.114243 1764857 kic_runner.go:114] Args: [docker exec --privileged addons-295301 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0414 11:01:05.159285 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:05.178873 1764857 machine.go:93] provisionDockerMachine start ...
	I0414 11:01:05.179005 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:05.203819 1764857 main.go:141] libmachine: Using SSH client type: native
	I0414 11:01:05.204143 1764857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0414 11:01:05.204160 1764857 main.go:141] libmachine: About to run SSH command:
	hostname
	I0414 11:01:05.328347 1764857 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-295301
	
	I0414 11:01:05.328401 1764857 ubuntu.go:169] provisioning hostname "addons-295301"
	I0414 11:01:05.328466 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:05.348970 1764857 main.go:141] libmachine: Using SSH client type: native
	I0414 11:01:05.349242 1764857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0414 11:01:05.349264 1764857 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-295301 && echo "addons-295301" | sudo tee /etc/hostname
	I0414 11:01:05.485367 1764857 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-295301
	
	I0414 11:01:05.485452 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:05.503540 1764857 main.go:141] libmachine: Using SSH client type: native
	I0414 11:01:05.503783 1764857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0414 11:01:05.503802 1764857 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-295301' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-295301/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-295301' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0414 11:01:05.625247 1764857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0414 11:01:05.625299 1764857 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20534-1756784/.minikube CaCertPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20534-1756784/.minikube}
	I0414 11:01:05.625330 1764857 ubuntu.go:177] setting up certificates
	I0414 11:01:05.625352 1764857 provision.go:84] configureAuth start
	I0414 11:01:05.625418 1764857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-295301
	I0414 11:01:05.643444 1764857 provision.go:143] copyHostCerts
	I0414 11:01:05.643526 1764857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-1756784/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20534-1756784/.minikube/ca.pem (1082 bytes)
	I0414 11:01:05.643660 1764857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-1756784/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20534-1756784/.minikube/cert.pem (1123 bytes)
	I0414 11:01:05.643731 1764857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-1756784/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20534-1756784/.minikube/key.pem (1675 bytes)
	I0414 11:01:05.643783 1764857 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20534-1756784/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20534-1756784/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20534-1756784/.minikube/certs/ca-key.pem org=jenkins.addons-295301 san=[127.0.0.1 192.168.49.2 addons-295301 localhost minikube]
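Note: the server certificate is deliberately issued for every name and address the API server can be reached by (the SAN list logged above: 127.0.0.1, 192.168.49.2, addons-295301, localhost, minikube). To inspect the SANs of the generated certificate by hand (OpenSSL 1.1.1 or newer):

	openssl x509 -noout -ext subjectAltName -in /home/jenkins/minikube-integration/20534-1756784/.minikube/machines/server.pem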
	I0414 11:01:05.994225 1764857 provision.go:177] copyRemoteCerts
	I0414 11:01:05.994292 1764857 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0414 11:01:05.994330 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:06.013525 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:06.102097 1764857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-1756784/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0414 11:01:06.125538 1764857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-1756784/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0414 11:01:06.148802 1764857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-1756784/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0414 11:01:06.171598 1764857 provision.go:87] duration metric: took 546.228491ms to configureAuth
	I0414 11:01:06.171624 1764857 ubuntu.go:193] setting minikube options for container-runtime
	I0414 11:01:06.171881 1764857 config.go:182] Loaded profile config "addons-295301": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:01:06.172020 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:06.191978 1764857 main.go:141] libmachine: Using SSH client type: native
	I0414 11:01:06.192197 1764857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0414 11:01:06.192215 1764857 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0414 11:01:06.407376 1764857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0414 11:01:06.407414 1764857 machine.go:96] duration metric: took 1.228512933s to provisionDockerMachine
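Note on the crio.minikube file written just above: /etc/sysconfig/crio.minikube is an environment file, and the assumption here is that the kicbase image's crio systemd unit sources it, so that CRIO_MINIKUBE_OPTIONS (the --insecure-registry flag for the 10.96.0.0/12 service CIDR) reaches the crio command line after the restart. Inside the node this can be confirmed with:

	systemctl cat crio     # merged unit; look for an EnvironmentFile= line referencing crio.minikube
	ps -o args= -C crio    # running arguments should include --insecure-registry 10.96.0.0/12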
	I0414 11:01:06.407431 1764857 client.go:171] duration metric: took 14.165704232s to LocalClient.Create
	I0414 11:01:06.407460 1764857 start.go:167] duration metric: took 14.165776511s to libmachine.API.Create "addons-295301"
	I0414 11:01:06.407477 1764857 start.go:293] postStartSetup for "addons-295301" (driver="docker")
	I0414 11:01:06.407495 1764857 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0414 11:01:06.407572 1764857 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0414 11:01:06.407624 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:06.426449 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:06.522148 1764857 ssh_runner.go:195] Run: cat /etc/os-release
	I0414 11:01:06.525513 1764857 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0414 11:01:06.525558 1764857 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0414 11:01:06.525571 1764857 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0414 11:01:06.525579 1764857 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0414 11:01:06.525591 1764857 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-1756784/.minikube/addons for local assets ...
	I0414 11:01:06.525655 1764857 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-1756784/.minikube/files for local assets ...
	I0414 11:01:06.525682 1764857 start.go:296] duration metric: took 118.191774ms for postStartSetup
	I0414 11:01:06.525990 1764857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-295301
	I0414 11:01:06.543565 1764857 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/config.json ...
	I0414 11:01:06.543838 1764857 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 11:01:06.543904 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:06.563152 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:06.649740 1764857 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0414 11:01:06.654573 1764857 start.go:128] duration metric: took 14.415153807s to createHost
	I0414 11:01:06.654606 1764857 start.go:83] releasing machines lock for "addons-295301", held for 14.415299782s
	I0414 11:01:06.654694 1764857 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-295301
	I0414 11:01:06.673169 1764857 ssh_runner.go:195] Run: cat /version.json
	I0414 11:01:06.673221 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:06.673258 1764857 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0414 11:01:06.673327 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:06.691416 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:06.691684 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:06.847540 1764857 ssh_runner.go:195] Run: systemctl --version
	I0414 11:01:06.852024 1764857 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0414 11:01:06.992315 1764857 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0414 11:01:06.996834 1764857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 11:01:07.016074 1764857 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0414 11:01:07.016164 1764857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0414 11:01:07.044998 1764857 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0414 11:01:07.045024 1764857 start.go:495] detecting cgroup driver to use...
	I0414 11:01:07.045059 1764857 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0414 11:01:07.045104 1764857 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0414 11:01:07.060907 1764857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0414 11:01:07.071602 1764857 docker.go:217] disabling cri-docker service (if available) ...
	I0414 11:01:07.071663 1764857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0414 11:01:07.084903 1764857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0414 11:01:07.098326 1764857 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0414 11:01:07.178068 1764857 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0414 11:01:07.266798 1764857 docker.go:233] disabling docker service ...
	I0414 11:01:07.266879 1764857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0414 11:01:07.286404 1764857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0414 11:01:07.297907 1764857 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0414 11:01:07.377205 1764857 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0414 11:01:07.461290 1764857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0414 11:01:07.472439 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0414 11:01:07.489334 1764857 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0414 11:01:07.489400 1764857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:01:07.498798 1764857 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0414 11:01:07.498867 1764857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:01:07.508434 1764857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:01:07.518016 1764857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:01:07.527912 1764857 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0414 11:01:07.536808 1764857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:01:07.546470 1764857 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0414 11:01:07.561610 1764857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
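Taken together, the edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a reconstruction from the sed commands, assuming the stock kicbase drop-in already contains the [crio.image] and [crio.runtime] tables):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]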
	I0414 11:01:07.571231 1764857 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0414 11:01:07.580366 1764857 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0414 11:01:07.580455 1764857 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0414 11:01:07.595330 1764857 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
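Note: the sysctl key net.bridge.bridge-nf-call-iptables only exists once the br_netfilter kernel module is loaded, so the failed probe above is expected on a fresh node and minikube falls back to modprobe. The sequence, in isolation:

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables       # resolvable now (0 or 1)
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward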
	I0414 11:01:07.604504 1764857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 11:01:07.683898 1764857 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0414 11:01:07.796487 1764857 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0414 11:01:07.796579 1764857 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0414 11:01:07.800149 1764857 start.go:563] Will wait 60s for crictl version
	I0414 11:01:07.800208 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:01:07.803616 1764857 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0414 11:01:07.839955 1764857 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0414 11:01:07.840052 1764857 ssh_runner.go:195] Run: crio --version
	I0414 11:01:07.876364 1764857 ssh_runner.go:195] Run: crio --version
	I0414 11:01:07.915388 1764857 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0414 11:01:07.916847 1764857 cli_runner.go:164] Run: docker network inspect addons-295301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0414 11:01:07.935198 1764857 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0414 11:01:07.939015 1764857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 11:01:07.950320 1764857 kubeadm.go:883] updating cluster {Name:addons-295301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-295301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0414 11:01:07.950465 1764857 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0414 11:01:07.950528 1764857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 11:01:08.019500 1764857 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 11:01:08.019521 1764857 crio.go:433] Images already preloaded, skipping extraction
	I0414 11:01:08.019570 1764857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0414 11:01:08.053129 1764857 crio.go:514] all images are preloaded for cri-o runtime.
	I0414 11:01:08.053157 1764857 cache_images.go:84] Images are preloaded, skipping loading
	I0414 11:01:08.053166 1764857 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.2 crio true true} ...
	I0414 11:01:08.053257 1764857 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-295301 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-295301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
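Note: the empty ExecStart= followed by a populated ExecStart= in the unit above is the standard systemd drop-in idiom: ExecStart is list-valued, so an override (written below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf) must first clear the value inherited from kubelet.service before replacing it. The merged unit can be reviewed on the node with:

	systemctl cat kubelet    # prints kubelet.service plus the 10-kubeadm.conf drop-in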
	I0414 11:01:08.053328 1764857 ssh_runner.go:195] Run: crio config
	I0414 11:01:08.097354 1764857 cni.go:84] Creating CNI manager for ""
	I0414 11:01:08.097386 1764857 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0414 11:01:08.097400 1764857 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0414 11:01:08.097426 1764857 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-295301 NodeName:addons-295301 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0414 11:01:08.097580 1764857 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-295301"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0414 11:01:08.097665 1764857 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0414 11:01:08.106838 1764857 binaries.go:44] Found k8s binaries, skipping transfer
	I0414 11:01:08.106916 1764857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0414 11:01:08.116174 1764857 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0414 11:01:08.133546 1764857 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0414 11:01:08.151318 1764857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
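Note: the kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml just before init (the cp at 11:01:10.043807 below). For a manual sanity check of a config of this shape, recent kubeadm releases (including v1.32) can validate it offline:

	sudo /var/lib/minikube/binaries/v1.32.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml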
	I0414 11:01:08.169476 1764857 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0414 11:01:08.173624 1764857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0414 11:01:08.185293 1764857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 11:01:08.261323 1764857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 11:01:08.274677 1764857 certs.go:68] Setting up /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301 for IP: 192.168.49.2
	I0414 11:01:08.274711 1764857 certs.go:194] generating shared ca certs ...
	I0414 11:01:08.274740 1764857 certs.go:226] acquiring lock for ca certs: {Name:mkb8d18b4854e149a23c3d8ca993095a76becfa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:01:08.274928 1764857 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20534-1756784/.minikube/ca.key
	I0414 11:01:08.594394 1764857 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-1756784/.minikube/ca.crt ...
	I0414 11:01:08.594432 1764857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-1756784/.minikube/ca.crt: {Name:mkdec31565c430950635dd2a3dadfb98f66a2446 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:01:08.594665 1764857 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-1756784/.minikube/ca.key ...
	I0414 11:01:08.594682 1764857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-1756784/.minikube/ca.key: {Name:mk54ef97a2ef8c58654dd42e0c26cb84a915538a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:01:08.594820 1764857 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20534-1756784/.minikube/proxy-client-ca.key
	I0414 11:01:08.673366 1764857 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-1756784/.minikube/proxy-client-ca.crt ...
	I0414 11:01:08.673403 1764857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-1756784/.minikube/proxy-client-ca.crt: {Name:mk0773006ea596ab907b0b06f5f5383e1c501f83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:01:08.673611 1764857 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-1756784/.minikube/proxy-client-ca.key ...
	I0414 11:01:08.673627 1764857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-1756784/.minikube/proxy-client-ca.key: {Name:mk8d3220c5d924cd8dd24a1c9a80aa1b94604c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:01:08.673735 1764857 certs.go:256] generating profile certs ...
	I0414 11:01:08.673823 1764857 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.key
	I0414 11:01:08.673868 1764857 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt with IP's: []
	I0414 11:01:08.857648 1764857 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt ...
	I0414 11:01:08.857690 1764857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: {Name:mka8250f9cf46cf05f638b70f43cd0930647f09c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:01:08.857895 1764857 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.key ...
	I0414 11:01:08.857911 1764857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.key: {Name:mk0ea431f43814f7ccde2ed235f209f296de0026 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:01:08.858018 1764857 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/apiserver.key.c4e03fab
	I0414 11:01:08.858041 1764857 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/apiserver.crt.c4e03fab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0414 11:01:09.331582 1764857 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/apiserver.crt.c4e03fab ...
	I0414 11:01:09.331624 1764857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/apiserver.crt.c4e03fab: {Name:mk5d670f0e06a3e21df1cbe5e451d8abd007e4e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:01:09.331826 1764857 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/apiserver.key.c4e03fab ...
	I0414 11:01:09.331850 1764857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/apiserver.key.c4e03fab: {Name:mk41a11717f350fdf7ef1755bf3ef8d6540a7709 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:01:09.331960 1764857 certs.go:381] copying /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/apiserver.crt.c4e03fab -> /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/apiserver.crt
	I0414 11:01:09.332060 1764857 certs.go:385] copying /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/apiserver.key.c4e03fab -> /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/apiserver.key
	I0414 11:01:09.332124 1764857 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/proxy-client.key
	I0414 11:01:09.332150 1764857 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/proxy-client.crt with IP's: []
	I0414 11:01:09.730779 1764857 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/proxy-client.crt ...
	I0414 11:01:09.730815 1764857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/proxy-client.crt: {Name:mk631eb52442b7c342639cf8bab586f3918a8a1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:01:09.731030 1764857 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/proxy-client.key ...
	I0414 11:01:09.731052 1764857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/proxy-client.key: {Name:mka2c55842ac18690c561cfe813fb8f982e2313f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:01:09.731305 1764857 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-1756784/.minikube/certs/ca-key.pem (1679 bytes)
	I0414 11:01:09.731356 1764857 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-1756784/.minikube/certs/ca.pem (1082 bytes)
	I0414 11:01:09.731434 1764857 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-1756784/.minikube/certs/cert.pem (1123 bytes)
	I0414 11:01:09.731483 1764857 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-1756784/.minikube/certs/key.pem (1675 bytes)
	I0414 11:01:09.732153 1764857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-1756784/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0414 11:01:09.757033 1764857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-1756784/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0414 11:01:09.780431 1764857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-1756784/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0414 11:01:09.804268 1764857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-1756784/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0414 11:01:09.828181 1764857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0414 11:01:09.851771 1764857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0414 11:01:09.874707 1764857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0414 11:01:09.898529 1764857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0414 11:01:09.921599 1764857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-1756784/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0414 11:01:09.944900 1764857 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0414 11:01:09.961733 1764857 ssh_runner.go:195] Run: openssl version
	I0414 11:01:09.967365 1764857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0414 11:01:09.977157 1764857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0414 11:01:09.980589 1764857 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 11:01 /usr/share/ca-certificates/minikubeCA.pem
	I0414 11:01:09.980654 1764857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0414 11:01:09.987346 1764857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
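Note: OpenSSL resolves trust in /etc/ssl/certs through symlinks named after each certificate's subject hash; the two commands above compute that hash (b5213941 for minikubeCA, per the symlink name) and create the link. Reproduced in isolation:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0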
	I0414 11:01:09.996453 1764857 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0414 11:01:09.999773 1764857 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0414 11:01:09.999830 1764857 kubeadm.go:392] StartCluster: {Name:addons-295301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-295301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:01:09.999924 1764857 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0414 11:01:09.999991 1764857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0414 11:01:10.035108 1764857 cri.go:89] found id: ""
	I0414 11:01:10.035177 1764857 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0414 11:01:10.043807 1764857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0414 11:01:10.052435 1764857 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0414 11:01:10.052512 1764857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0414 11:01:10.061097 1764857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0414 11:01:10.061119 1764857 kubeadm.go:157] found existing configuration files:
	
	I0414 11:01:10.061172 1764857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0414 11:01:10.069840 1764857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0414 11:01:10.069906 1764857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0414 11:01:10.077903 1764857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0414 11:01:10.086440 1764857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0414 11:01:10.086504 1764857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0414 11:01:10.094721 1764857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0414 11:01:10.103161 1764857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0414 11:01:10.103246 1764857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0414 11:01:10.111376 1764857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0414 11:01:10.119721 1764857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0414 11:01:10.119792 1764857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0414 11:01:10.127896 1764857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0414 11:01:10.165634 1764857 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0414 11:01:10.165693 1764857 kubeadm.go:310] [preflight] Running pre-flight checks
	I0414 11:01:10.183254 1764857 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0414 11:01:10.183370 1764857 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0414 11:01:10.183419 1764857 kubeadm.go:310] OS: Linux
	I0414 11:01:10.183486 1764857 kubeadm.go:310] CGROUPS_CPU: enabled
	I0414 11:01:10.183549 1764857 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0414 11:01:10.183655 1764857 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0414 11:01:10.183759 1764857 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0414 11:01:10.183861 1764857 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0414 11:01:10.183960 1764857 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0414 11:01:10.184047 1764857 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0414 11:01:10.184118 1764857 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0414 11:01:10.184194 1764857 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0414 11:01:10.239149 1764857 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0414 11:01:10.239308 1764857 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0414 11:01:10.239452 1764857 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0414 11:01:10.245771 1764857 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0414 11:01:10.248077 1764857 out.go:235]   - Generating certificates and keys ...
	I0414 11:01:10.248191 1764857 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0414 11:01:10.248304 1764857 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0414 11:01:10.304310 1764857 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0414 11:01:10.483792 1764857 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0414 11:01:10.597125 1764857 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0414 11:01:10.827665 1764857 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0414 11:01:10.884926 1764857 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0414 11:01:10.885078 1764857 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-295301 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0414 11:01:11.089733 1764857 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0414 11:01:11.089935 1764857 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-295301 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0414 11:01:11.288649 1764857 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0414 11:01:11.485563 1764857 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0414 11:01:11.723396 1764857 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0414 11:01:11.723546 1764857 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0414 11:01:11.925412 1764857 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0414 11:01:12.001865 1764857 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0414 11:01:12.183211 1764857 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0414 11:01:12.311730 1764857 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0414 11:01:12.402130 1764857 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0414 11:01:12.402600 1764857 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0414 11:01:12.406432 1764857 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0414 11:01:12.409020 1764857 out.go:235]   - Booting up control plane ...
	I0414 11:01:12.409148 1764857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0414 11:01:12.409216 1764857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0414 11:01:12.409270 1764857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0414 11:01:12.418031 1764857 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0414 11:01:12.423532 1764857 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0414 11:01:12.423624 1764857 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0414 11:01:12.507012 1764857 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0414 11:01:12.507181 1764857 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0414 11:01:13.009144 1764857 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.147623ms
	I0414 11:01:13.009292 1764857 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0414 11:01:17.510487 1764857 kubeadm.go:310] [api-check] The API server is healthy after 4.501421751s
	I0414 11:01:17.523400 1764857 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0414 11:01:17.537347 1764857 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0414 11:01:17.557474 1764857 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0414 11:01:17.557754 1764857 kubeadm.go:310] [mark-control-plane] Marking the node addons-295301 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0414 11:01:17.566139 1764857 kubeadm.go:310] [bootstrap-token] Using token: ymai8h.ektgi8gz6k6sodsk
	I0414 11:01:17.567842 1764857 out.go:235]   - Configuring RBAC rules ...
	I0414 11:01:17.568002 1764857 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0414 11:01:17.571533 1764857 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0414 11:01:17.577416 1764857 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0414 11:01:17.580124 1764857 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0414 11:01:17.584122 1764857 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0414 11:01:17.586824 1764857 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0414 11:01:17.917006 1764857 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0414 11:01:18.335142 1764857 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0414 11:01:18.916712 1764857 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0414 11:01:18.918815 1764857 kubeadm.go:310] 
	I0414 11:01:18.918942 1764857 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0414 11:01:18.918967 1764857 kubeadm.go:310] 
	I0414 11:01:18.919079 1764857 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0414 11:01:18.919100 1764857 kubeadm.go:310] 
	I0414 11:01:18.919131 1764857 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0414 11:01:18.919211 1764857 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0414 11:01:18.919279 1764857 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0414 11:01:18.919288 1764857 kubeadm.go:310] 
	I0414 11:01:18.919355 1764857 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0414 11:01:18.919364 1764857 kubeadm.go:310] 
	I0414 11:01:18.919434 1764857 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0414 11:01:18.919443 1764857 kubeadm.go:310] 
	I0414 11:01:18.919538 1764857 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0414 11:01:18.919645 1764857 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0414 11:01:18.919739 1764857 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0414 11:01:18.919748 1764857 kubeadm.go:310] 
	I0414 11:01:18.919858 1764857 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0414 11:01:18.919967 1764857 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0414 11:01:18.919977 1764857 kubeadm.go:310] 
	I0414 11:01:18.920105 1764857 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ymai8h.ektgi8gz6k6sodsk \
	I0414 11:01:18.920268 1764857 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3306361d8f8e1efeb7a2e75e6808d4dda12e0a65eb7cfe66d070f7588bdaea82 \
	I0414 11:01:18.920310 1764857 kubeadm.go:310] 	--control-plane 
	I0414 11:01:18.920320 1764857 kubeadm.go:310] 
	I0414 11:01:18.920456 1764857 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0414 11:01:18.920467 1764857 kubeadm.go:310] 
	I0414 11:01:18.920598 1764857 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ymai8h.ektgi8gz6k6sodsk \
	I0414 11:01:18.920759 1764857 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3306361d8f8e1efeb7a2e75e6808d4dda12e0a65eb7cfe66d070f7588bdaea82 
	I0414 11:01:18.923054 1764857 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0414 11:01:18.923291 1764857 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0414 11:01:18.923410 1764857 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0414 11:01:18.923439 1764857 cni.go:84] Creating CNI manager for ""
	I0414 11:01:18.923449 1764857 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0414 11:01:18.925728 1764857 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0414 11:01:18.927240 1764857 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0414 11:01:18.931298 1764857 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0414 11:01:18.931323 1764857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0414 11:01:18.949303 1764857 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
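Note: with the docker driver and the crio runtime, minikube recommends and installs kindnet as the CNI (logged twice above); the manifest is written to /var/tmp/minikube/cni.yaml and applied with the cluster's own kubectl. Once it settles, each node should run a kindnet pod, which can be checked with (assuming the upstream daemonset's app=kindnet label):

	kubectl -n kube-system get pods -l app=kindnet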
	I0414 11:01:19.156052 1764857 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0414 11:01:19.156176 1764857 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 11:01:19.156207 1764857 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-295301 minikube.k8s.io/updated_at=2025_04_14T11_01_19_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=43cb59e6a4e9845c84b0379fb52045b7420d26a4 minikube.k8s.io/name=addons-295301 minikube.k8s.io/primary=true
	I0414 11:01:19.228681 1764857 ops.go:34] apiserver oom_adj: -16
	I0414 11:01:19.228701 1764857 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 11:01:19.729424 1764857 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 11:01:20.229651 1764857 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 11:01:20.729702 1764857 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 11:01:21.229233 1764857 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 11:01:21.728834 1764857 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 11:01:22.229637 1764857 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 11:01:22.729627 1764857 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 11:01:23.229498 1764857 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0414 11:01:23.298010 1764857 kubeadm.go:1113] duration metric: took 4.141901548s to wait for elevateKubeSystemPrivileges
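The repeated `kubectl get sa default` runs above appear to poll until the default ServiceAccount exists, a common proxy for "the control plane is serving and namespace initialization has finished", before the RBAC binding can be relied on. A sketch of that wait loop under those assumptions:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForDefaultSA(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if cmd.Run() == nil {
                return nil // ServiceAccount exists; cluster is usable
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
        return fmt.Errorf("timed out waiting for default service account")
    }

    func main() {
        if err := waitForDefaultSA(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }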
	I0414 11:01:23.298049 1764857 kubeadm.go:394] duration metric: took 13.298223441s to StartCluster
	I0414 11:01:23.298073 1764857 settings.go:142] acquiring lock: {Name:mk6a735f322e7f96ec9f65c7a1f33dfdf1f4261d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:01:23.298208 1764857 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20534-1756784/kubeconfig
	I0414 11:01:23.298661 1764857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-1756784/kubeconfig: {Name:mk81528aab280defac9e292d7b7806d6cd07ea90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0414 11:01:23.298909 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0414 11:01:23.298913 1764857 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0414 11:01:23.298931 1764857 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0414 11:01:23.299067 1764857 addons.go:69] Setting cloud-spanner=true in profile "addons-295301"
	I0414 11:01:23.299085 1764857 addons.go:69] Setting default-storageclass=true in profile "addons-295301"
	I0414 11:01:23.299102 1764857 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-295301"
	I0414 11:01:23.299123 1764857 addons.go:238] Setting addon cloud-spanner=true in "addons-295301"
	I0414 11:01:23.299176 1764857 host.go:66] Checking if "addons-295301" exists ...
	I0414 11:01:23.299190 1764857 config.go:182] Loaded profile config "addons-295301": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:01:23.299180 1764857 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-295301"
	I0414 11:01:23.299234 1764857 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-295301"
	I0414 11:01:23.299246 1764857 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-295301"
	I0414 11:01:23.299258 1764857 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-295301"
	I0414 11:01:23.299290 1764857 host.go:66] Checking if "addons-295301" exists ...
	I0414 11:01:23.299292 1764857 host.go:66] Checking if "addons-295301" exists ...
	I0414 11:01:23.299509 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:23.299701 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:23.299747 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:23.299779 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:23.299756 1764857 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-295301"
	I0414 11:01:23.299825 1764857 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-295301"
	I0414 11:01:23.299983 1764857 addons.go:69] Setting registry=true in profile "addons-295301"
	I0414 11:01:23.300037 1764857 addons.go:238] Setting addon registry=true in "addons-295301"
	I0414 11:01:23.300097 1764857 host.go:66] Checking if "addons-295301" exists ...
	I0414 11:01:23.300219 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:23.300718 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:23.300761 1764857 addons.go:69] Setting volumesnapshots=true in profile "addons-295301"
	I0414 11:01:23.300782 1764857 addons.go:238] Setting addon volumesnapshots=true in "addons-295301"
	I0414 11:01:23.300809 1764857 host.go:66] Checking if "addons-295301" exists ...
	I0414 11:01:23.300718 1764857 addons.go:69] Setting volcano=true in profile "addons-295301"
	I0414 11:01:23.301306 1764857 addons.go:238] Setting addon volcano=true in "addons-295301"
	I0414 11:01:23.301357 1764857 host.go:66] Checking if "addons-295301" exists ...
	I0414 11:01:23.301367 1764857 addons.go:69] Setting inspektor-gadget=true in profile "addons-295301"
	I0414 11:01:23.301545 1764857 addons.go:238] Setting addon inspektor-gadget=true in "addons-295301"
	I0414 11:01:23.301597 1764857 host.go:66] Checking if "addons-295301" exists ...
	I0414 11:01:23.301934 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:23.299074 1764857 addons.go:69] Setting yakd=true in profile "addons-295301"
	I0414 11:01:23.312021 1764857 addons.go:238] Setting addon yakd=true in "addons-295301"
	I0414 11:01:23.312093 1764857 host.go:66] Checking if "addons-295301" exists ...
	I0414 11:01:23.312166 1764857 addons.go:69] Setting metrics-server=true in profile "addons-295301"
	I0414 11:01:23.312198 1764857 addons.go:238] Setting addon metrics-server=true in "addons-295301"
	I0414 11:01:23.312234 1764857 host.go:66] Checking if "addons-295301" exists ...
	I0414 11:01:23.312815 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:23.312877 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:23.316997 1764857 addons.go:69] Setting storage-provisioner=true in profile "addons-295301"
	I0414 11:01:23.317036 1764857 addons.go:238] Setting addon storage-provisioner=true in "addons-295301"
	I0414 11:01:23.317081 1764857 host.go:66] Checking if "addons-295301" exists ...
	I0414 11:01:23.317743 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:23.302354 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:23.302371 1764857 addons.go:69] Setting gcp-auth=true in profile "addons-295301"
	I0414 11:01:23.320048 1764857 mustload.go:65] Loading cluster: addons-295301
	I0414 11:01:23.302409 1764857 addons.go:69] Setting ingress=true in profile "addons-295301"
	I0414 11:01:23.320405 1764857 addons.go:238] Setting addon ingress=true in "addons-295301"
	I0414 11:01:23.320495 1764857 host.go:66] Checking if "addons-295301" exists ...
	I0414 11:01:23.302421 1764857 addons.go:69] Setting ingress-dns=true in profile "addons-295301"
	I0414 11:01:23.321083 1764857 addons.go:238] Setting addon ingress-dns=true in "addons-295301"
	I0414 11:01:23.321280 1764857 host.go:66] Checking if "addons-295301" exists ...
	I0414 11:01:23.322077 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:23.302434 1764857 out.go:177] * Verifying Kubernetes components...
	I0414 11:01:23.309998 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:23.299081 1764857 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-295301"
	I0414 11:01:23.324188 1764857 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-295301"
	I0414 11:01:23.324331 1764857 host.go:66] Checking if "addons-295301" exists ...
	I0414 11:01:23.326704 1764857 config.go:182] Loaded profile config "addons-295301": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:01:23.326594 1764857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0414 11:01:23.329387 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:23.335937 1764857 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.31
	I0414 11:01:23.338232 1764857 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0414 11:01:23.338264 1764857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0414 11:01:23.338333 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
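The two docker templates that recur throughout this section pull single fields out of `docker inspect` output: {{.State.Status}} for the container state, and the nested index expression for the host port mapped to the container's SSH port 22. The same extraction done against the inspect JSON directly (field names as emitted by `docker inspect`):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("docker", "inspect", "addons-295301").Output()
        if err != nil {
            panic(err)
        }
        var info []struct {
            State struct {
                Status string `json:"Status"`
            } `json:"State"`
            NetworkSettings struct {
                Ports map[string][]struct {
                    HostPort string `json:"HostPort"`
                } `json:"Ports"`
            } `json:"NetworkSettings"`
        }
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Println("status:", info[0].State.Status) // e.g. "running"
        if ports := info[0].NetworkSettings.Ports["22/tcp"]; len(ports) > 0 {
            fmt.Println("ssh host port:", ports[0].HostPort) // e.g. 32768, used by sshutil below
        }
    }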
	I0414 11:01:23.347638 1764857 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0414 11:01:23.349186 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:23.350121 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:23.352516 1764857 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0414 11:01:23.353958 1764857 out.go:177]   - Using image docker.io/registry:2.8.3
	I0414 11:01:23.355441 1764857 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0414 11:01:23.355462 1764857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0414 11:01:23.355529 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:23.356687 1764857 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0414 11:01:23.356707 1764857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0414 11:01:23.356769 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:23.366669 1764857 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0414 11:01:23.370721 1764857 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.1
	I0414 11:01:23.370842 1764857 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0414 11:01:23.370862 1764857 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0414 11:01:23.370951 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:23.372520 1764857 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0414 11:01:23.372547 1764857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0414 11:01:23.372608 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:23.374957 1764857 addons.go:238] Setting addon default-storageclass=true in "addons-295301"
	I0414 11:01:23.375029 1764857 host.go:66] Checking if "addons-295301" exists ...
	I0414 11:01:23.375802 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	W0414 11:01:23.389043 1764857 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0414 11:01:23.391191 1764857 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0414 11:01:23.391546 1764857 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-295301"
	I0414 11:01:23.391598 1764857 host.go:66] Checking if "addons-295301" exists ...
	I0414 11:01:23.392138 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:23.393780 1764857 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 11:01:23.395144 1764857 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 11:01:23.396810 1764857 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0414 11:01:23.396838 1764857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0414 11:01:23.396907 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:23.401011 1764857 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0414 11:01:23.403222 1764857 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 11:01:23.403250 1764857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0414 11:01:23.403324 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:23.406430 1764857 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0414 11:01:23.407797 1764857 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0414 11:01:23.407824 1764857 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0414 11:01:23.407903 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:23.413290 1764857 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0414 11:01:23.413664 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:23.417636 1764857 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0414 11:01:23.417663 1764857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0414 11:01:23.417734 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:23.425242 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:23.443679 1764857 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0414 11:01:23.443822 1764857 host.go:66] Checking if "addons-295301" exists ...
	I0414 11:01:23.444874 1764857 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0414 11:01:23.444895 1764857 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0414 11:01:23.444971 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:23.450366 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:23.451905 1764857 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0414 11:01:23.451977 1764857 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0414 11:01:23.452058 1764857 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0414 11:01:23.452115 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:23.453398 1764857 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0414 11:01:23.453438 1764857 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0414 11:01:23.453488 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:23.458961 1764857 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0414 11:01:23.459205 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:23.460703 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:23.461676 1764857 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0414 11:01:23.462999 1764857 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0414 11:01:23.464354 1764857 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0414 11:01:23.465543 1764857 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0414 11:01:23.466685 1764857 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0414 11:01:23.467004 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:23.468888 1764857 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0414 11:01:23.469741 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:23.470926 1764857 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0414 11:01:23.470990 1764857 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0414 11:01:23.472026 1764857 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0414 11:01:23.472049 1764857 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0414 11:01:23.472120 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:23.473646 1764857 out.go:177]   - Using image docker.io/busybox:stable
	I0414 11:01:23.474893 1764857 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0414 11:01:23.474928 1764857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0414 11:01:23.474994 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:23.516579 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:23.516579 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:23.516579 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:23.520579 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:23.522427 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:23.522724 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:23.533149 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	W0414 11:01:23.586394 1764857 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0414 11:01:23.586439 1764857 retry.go:31] will retry after 355.479863ms: ssh: handshake failed: EOF
	W0414 11:01:23.586720 1764857 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0414 11:01:23.586742 1764857 retry.go:31] will retry after 213.551659ms: ssh: handshake failed: EOF
	I0414 11:01:23.708242 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0414 11:01:23.791178 1764857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0414 11:01:23.887448 1764857 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0414 11:01:23.887481 1764857 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0414 11:01:23.900895 1764857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0414 11:01:23.904317 1764857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0414 11:01:24.000779 1764857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0414 11:01:24.090168 1764857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0414 11:01:24.104262 1764857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0414 11:01:24.181589 1764857 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0414 11:01:24.181703 1764857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0414 11:01:24.182955 1764857 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0414 11:01:24.183034 1764857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0414 11:01:24.184704 1764857 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0414 11:01:24.184776 1764857 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0414 11:01:24.190235 1764857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0414 11:01:24.195351 1764857 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0414 11:01:24.195403 1764857 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0414 11:01:24.285748 1764857 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0414 11:01:24.285783 1764857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0414 11:01:24.285964 1764857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0414 11:01:24.309748 1764857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0414 11:01:24.482586 1764857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0414 11:01:24.484210 1764857 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0414 11:01:24.484286 1764857 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0414 11:01:24.485825 1764857 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0414 11:01:24.485861 1764857 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0414 11:01:24.490430 1764857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0414 11:01:24.585773 1764857 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0414 11:01:24.585883 1764857 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0414 11:01:24.883954 1764857 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0414 11:01:24.884058 1764857 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0414 11:01:24.898080 1764857 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 11:01:24.898112 1764857 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0414 11:01:24.989493 1764857 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0414 11:01:24.989588 1764857 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0414 11:01:24.997214 1764857 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0414 11:01:24.997253 1764857 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0414 11:01:25.281282 1764857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0414 11:01:25.296153 1764857 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0414 11:01:25.296200 1764857 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0414 11:01:25.382210 1764857 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0414 11:01:25.382318 1764857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0414 11:01:25.582157 1764857 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0414 11:01:25.582264 1764857 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0414 11:01:25.593865 1764857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0414 11:01:26.089277 1764857 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0414 11:01:26.089374 1764857 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0414 11:01:26.195475 1764857 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0414 11:01:26.195576 1764857 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0414 11:01:26.481374 1764857 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0414 11:01:26.481404 1764857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0414 11:01:26.492003 1764857 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0414 11:01:26.492043 1764857 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0414 11:01:26.500976 1764857 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.792687445s)
	I0414 11:01:26.501026 1764857 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
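For reference, the sed pipeline whose completion is logged above edits the coredns ConfigMap in place: it inserts a log directive before errors, and a hosts stanza ahead of the forward plugin so that host.minikube.internal resolves to the gateway. The rewritten Corefile fragment looks roughly like this (other plugins elided):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }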
	I0414 11:01:26.691315 1764857 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0414 11:01:26.691427 1764857 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0414 11:01:26.800183 1764857 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 11:01:26.800212 1764857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0414 11:01:26.988409 1764857 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0414 11:01:26.988510 1764857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0414 11:01:27.101930 1764857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 11:01:27.186805 1764857 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0414 11:01:27.186834 1764857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0414 11:01:27.187105 1764857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.395826379s)
	I0414 11:01:27.187191 1764857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.286264092s)
	I0414 11:01:27.187229 1764857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.28288551s)
	I0414 11:01:27.187258 1764857 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.186392186s)
	I0414 11:01:27.188183 1764857 node_ready.go:35] waiting up to 6m0s for node "addons-295301" to be "Ready" ...
	I0414 11:01:27.197333 1764857 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-295301" context rescaled to 1 replicas
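On a single-node cluster there is little point in running two CoreDNS replicas, so the deployment is rescaled to one. A standalone sketch with the same effect (kubeconfig path from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same effect as the rescale logged above: pin coredns to one replica.
        out, err := exec.Command("kubectl", "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-n", "kube-system", "scale", "deployment", "coredns", "--replicas=1").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("scale failed:", err)
        }
    }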
	I0414 11:01:27.485069 1764857 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0414 11:01:27.485197 1764857 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0414 11:01:27.681976 1764857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0414 11:01:29.194591 1764857 node_ready.go:53] node "addons-295301" has status "Ready":"False"
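node_ready polls the Node object until its Ready condition reports True; the kubelet flips that condition once the CNI applied earlier is up, which is why "Ready":"False" repeats below for several seconds. A client-go sketch of the same check (library usage assumed, not minikube's exact code):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget from the log
        for time.Now().Before(deadline) {
            n, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-295301", metav1.GetOptions{})
            if err == nil && nodeReady(n) {
                fmt.Println("node Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for node")
    }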
	I0414 11:01:30.489015 1764857 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0414 11:01:30.489127 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:30.503747 1764857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.413467413s)
	I0414 11:01:30.503849 1764857 addons.go:479] Verifying addon ingress=true in "addons-295301"
	I0414 11:01:30.503885 1764857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.399588712s)
	I0414 11:01:30.503980 1764857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.313715314s)
	I0414 11:01:30.504020 1764857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.218035774s)
	I0414 11:01:30.504346 1764857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.194566501s)
	I0414 11:01:30.504376 1764857 addons.go:479] Verifying addon registry=true in "addons-295301"
	I0414 11:01:30.504657 1764857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.021960624s)
	I0414 11:01:30.504756 1764857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.014224441s)
	I0414 11:01:30.504843 1764857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.223453883s)
	I0414 11:01:30.504856 1764857 addons.go:479] Verifying addon metrics-server=true in "addons-295301"
	I0414 11:01:30.504920 1764857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.911017858s)
	I0414 11:01:30.505613 1764857 out.go:177] * Verifying ingress addon...
	I0414 11:01:30.505634 1764857 out.go:177] * Verifying registry addon...
	I0414 11:01:30.507655 1764857 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-295301 service yakd-dashboard -n yakd-dashboard
	
	I0414 11:01:30.508681 1764857 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0414 11:01:30.508695 1764857 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0414 11:01:30.513672 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:30.514940 1764857 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0414 11:01:30.515011 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:30.515487 1764857 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0414 11:01:30.515505 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
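Each kapi.go wait above lists pods by label selector and re-checks until every match reaches phase Running, which is why the same "current state: Pending" lines repeat below at roughly half-second intervals. A client-go sketch of that loop (assumed shape, not minikube's exact code), using the ingress-nginx selector from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func allRunning(pods []corev1.Pod) bool {
        for _, p := range pods {
            if p.Status.Phase != corev1.PodRunning {
                return false
            }
        }
        return true
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        for {
            pods, err := client.CoreV1().Pods("ingress-nginx").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "app.kubernetes.io/name=ingress-nginx"})
            if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
                fmt.Println("all ingress-nginx pods Running")
                return
            }
            time.Sleep(500 * time.Millisecond) // the ~500ms cadence visible in the log
        }
    }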
	W0414 11:01:30.516604 1764857 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
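The storage class failure above is Kubernetes' optimistic-concurrency conflict: something else updated the StorageClass between minikube's read and its write, so the stale resourceVersion is rejected. The idiomatic remedy is to re-fetch and retry the mutation, for example with client-go's RetryOnConflict; a sketch (the annotation key is the standard default-class marker):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func markDefault(client *kubernetes.Clientset) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            // Re-read on every attempt so the update carries a fresh resourceVersion.
            sc, err := client.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            _, err = client.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
            return err // a Conflict here triggers another round
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        if err := markDefault(kubernetes.NewForConfigOrDie(cfg)); err != nil {
            panic(err)
        }
    }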
	I0414 11:01:30.805808 1764857 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0414 11:01:30.893054 1764857 addons.go:238] Setting addon gcp-auth=true in "addons-295301"
	I0414 11:01:30.893121 1764857 host.go:66] Checking if "addons-295301" exists ...
	I0414 11:01:30.893651 1764857 cli_runner.go:164] Run: docker container inspect addons-295301 --format={{.State.Status}}
	I0414 11:01:30.916566 1764857 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0414 11:01:30.916623 1764857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-295301
	I0414 11:01:30.937458 1764857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/addons-295301/id_rsa Username:docker}
	I0414 11:01:31.012200 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:31.012467 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:31.512553 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:31.512783 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:31.594311 1764857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.492248527s)
	W0414 11:01:31.594364 1764857 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0414 11:01:31.594392 1764857 retry.go:31] will retry after 167.949973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
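The failure being retried here is an ordering race, not a bad manifest: the VolumeSnapshotClass in the same apply batch cannot be mapped until the CRDs created moments earlier are established by the API server. Re-applying after a short delay, as retry.go does above, resolves it; a generic sketch of that pattern:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs `kubectl apply` a few times with growing delays,
    // absorbing "no matches for kind ... ensure CRDs are installed first" races.
    func applyWithRetry(manifests []string, attempts int, backoff time.Duration) error {
        args := []string{"--kubeconfig=/var/lib/minikube/kubeconfig", "apply"}
        for _, f := range manifests {
            args = append(args, "-f", f)
        }
        var err error
        for i := 0; i < attempts; i++ {
            out, e := exec.Command("kubectl", args...).CombinedOutput()
            if e == nil {
                return nil
            }
            err = fmt.Errorf("%v: %s", e, out)
            time.Sleep(backoff)
            backoff *= 2 // exponential backoff between attempts
        }
        return err
    }

    func main() {
        err := applyWithRetry([]string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
            3, 200*time.Millisecond)
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }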
	I0414 11:01:31.690988 1764857 node_ready.go:53] node "addons-295301" has status "Ready":"False"
	I0414 11:01:31.763181 1764857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0414 11:01:31.982165 1764857 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.0655606s)
	I0414 11:01:31.982477 1764857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.300361565s)
	I0414 11:01:31.982566 1764857 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-295301"
	I0414 11:01:31.984175 1764857 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0414 11:01:31.984970 1764857 out.go:177] * Verifying csi-hostpath-driver addon...
	I0414 11:01:31.986463 1764857 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0414 11:01:31.987412 1764857 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0414 11:01:31.987859 1764857 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0414 11:01:31.987884 1764857 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0414 11:01:31.990469 1764857 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0414 11:01:31.990492 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:32.008558 1764857 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0414 11:01:32.008594 1764857 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0414 11:01:32.014619 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:32.014778 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:32.084772 1764857 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0414 11:01:32.084806 1764857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0414 11:01:32.103719 1764857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0414 11:01:32.494559 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:32.595931 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:32.596082 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:32.990949 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:33.011749 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:33.012052 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:33.491582 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:33.512168 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:33.512453 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:33.691205 1764857 node_ready.go:53] node "addons-295301" has status "Ready":"False"
	I0414 11:01:33.991765 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:34.012536 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:34.012721 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:34.491634 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:34.512164 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:34.512359 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:34.527253 1764857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.76401178s)
	I0414 11:01:34.527311 1764857 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.423563313s)
	I0414 11:01:34.528411 1764857 addons.go:479] Verifying addon gcp-auth=true in "addons-295301"
	I0414 11:01:34.531161 1764857 out.go:177] * Verifying gcp-auth addon...
	I0414 11:01:34.533132 1764857 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0414 11:01:34.535543 1764857 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0414 11:01:34.535561 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:34.990565 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:35.012194 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:35.012447 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:35.035903 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:35.491644 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:35.512595 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:35.512712 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:35.536243 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:35.990974 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:36.011933 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:36.012172 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:36.037101 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:36.191806 1764857 node_ready.go:53] node "addons-295301" has status "Ready":"False"
	I0414 11:01:36.491403 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:36.512015 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:36.512217 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:36.536576 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:36.991057 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:37.011885 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:37.012047 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:37.036810 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:37.491159 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:37.512057 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:37.512148 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:37.536986 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:37.991041 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:38.011772 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:38.012005 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:38.036612 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:38.490894 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:38.511234 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:38.511321 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:38.536237 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:38.691043 1764857 node_ready.go:53] node "addons-295301" has status "Ready":"False"
	I0414 11:01:38.991420 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:39.012484 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:39.012727 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:39.036296 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:39.491779 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:39.513267 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:39.513676 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:39.535979 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:39.990853 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:40.011637 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:40.011867 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:40.036621 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:40.490457 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:40.512136 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:40.512194 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:40.537095 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:40.691861 1764857 node_ready.go:53] node "addons-295301" has status "Ready":"False"
	I0414 11:01:40.990995 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:41.011772 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:41.011978 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:41.036789 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:41.491775 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:41.512519 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:41.512570 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:41.536433 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:41.991217 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:42.011996 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:42.012185 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:42.037025 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:42.491184 1764857 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0414 11:01:42.491213 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:42.512151 1764857 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0414 11:01:42.512185 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:42.512502 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:42.539986 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:42.692521 1764857 node_ready.go:49] node "addons-295301" has status "Ready":"True"
	I0414 11:01:42.692553 1764857 node_ready.go:38] duration metric: took 15.504336858s for node "addons-295301" to be "Ready" ...
	I0414 11:01:42.692565 1764857 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0414 11:01:42.697224 1764857 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-nm4lc" in "kube-system" namespace to be "Ready" ...
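	(The kapi.go:96 lines above and below come from a simple poll loop: list the pods matching a label selector roughly every 500ms and re-check their phase until they leave Pending or a timeout elapses. A minimal client-go sketch of that pattern follows; this is not minikube's actual kapi.go, and the kubeconfig location and 500ms cadence are assumptions read off the timestamps.)

	    // Minimal sketch of the label-selector poll seen in the kapi.go:96 lines.
	    // Assumes a kubeconfig at the default location; not minikube's real code.
	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func waitForSelector(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
	                metav1.ListOptions{LabelSelector: selector})
	            if err != nil {
	                return err
	            }
	            running := len(pods.Items) > 0
	            for _, p := range pods.Items {
	                if p.Status.Phase != corev1.PodRunning {
	                    running = false // corresponds to "current state: Pending" above
	                }
	            }
	            if running {
	                return nil
	            }
	            time.Sleep(500 * time.Millisecond) // cadence inferred from the log timestamps
	        }
	        return fmt.Errorf("timed out waiting for pods matching %q", selector)
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs := kubernetes.NewForConfigOrDie(cfg)
	        if err := waitForSelector(cs, "kube-system",
	            "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
	            panic(err)
	        }
	    }
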
	I0414 11:01:42.993427 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:43.093411 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:43.093455 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:43.093520 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:43.491714 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:43.512339 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:43.512467 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:43.535734 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:43.991417 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:44.012595 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:44.012823 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:44.036295 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:44.495877 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:44.583482 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:44.583889 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:44.584804 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:44.706153 1764857 pod_ready.go:103] pod "amd-gpu-device-plugin-nm4lc" in "kube-system" namespace has status "Ready":"False"
	I0414 11:01:44.991384 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:45.012517 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:45.012537 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:45.083155 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:45.491299 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:45.512212 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:45.512221 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:45.537086 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:45.992183 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:46.011909 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:46.012035 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:46.036943 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:46.492033 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:46.512182 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:46.512460 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:46.536847 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:46.991682 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:47.012572 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:47.012634 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:47.036486 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:47.202604 1764857 pod_ready.go:103] pod "amd-gpu-device-plugin-nm4lc" in "kube-system" namespace has status "Ready":"False"
	I0414 11:01:47.491292 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:47.512549 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:47.512718 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:47.536452 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:47.991380 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:48.012449 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:48.012762 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:48.036716 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:48.492136 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:48.511898 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:48.512030 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:48.536599 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:48.991932 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:49.012265 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:49.012519 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:49.082539 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:49.203306 1764857 pod_ready.go:103] pod "amd-gpu-device-plugin-nm4lc" in "kube-system" namespace has status "Ready":"False"
	I0414 11:01:49.492747 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:49.513071 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:49.513320 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:49.582373 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:49.991994 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:50.011919 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:50.011952 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:50.036703 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:50.492359 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:50.512491 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:50.512648 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:50.593351 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:50.990835 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:51.012481 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:51.012487 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:51.036063 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:51.204346 1764857 pod_ready.go:103] pod "amd-gpu-device-plugin-nm4lc" in "kube-system" namespace has status "Ready":"False"
	I0414 11:01:51.491124 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:51.511990 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:51.512116 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:51.536798 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:51.991242 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:52.011786 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:52.011943 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:52.036623 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:52.491254 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:52.511943 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:52.512214 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:52.537014 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:52.991447 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:53.012118 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:53.012148 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:53.036684 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:53.492372 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:53.512883 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:53.512894 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:53.536979 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:53.707545 1764857 pod_ready.go:103] pod "amd-gpu-device-plugin-nm4lc" in "kube-system" namespace has status "Ready":"False"
	I0414 11:01:53.991895 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:54.012878 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:54.013662 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:54.036316 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:54.491213 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:54.511979 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:54.512034 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:54.536743 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:54.990721 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:55.012816 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:55.012848 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:55.036783 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:55.491770 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:55.512346 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:55.512542 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:55.536283 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:55.991010 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:56.012141 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:56.012295 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:56.037082 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:56.203338 1764857 pod_ready.go:103] pod "amd-gpu-device-plugin-nm4lc" in "kube-system" namespace has status "Ready":"False"
	I0414 11:01:56.491755 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:56.512508 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:56.512556 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:56.536709 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:56.991419 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:57.012055 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:57.012143 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:57.036658 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:57.491778 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:57.512521 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:57.512524 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:57.536449 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:57.991540 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:58.012557 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:58.012573 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:58.036546 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:58.491810 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:58.512550 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:58.512629 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:58.536210 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:58.703700 1764857 pod_ready.go:103] pod "amd-gpu-device-plugin-nm4lc" in "kube-system" namespace has status "Ready":"False"
	I0414 11:01:58.991490 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:59.012544 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:59.012544 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:59.036240 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:59.491096 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:01:59.511827 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:01:59.511918 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:01:59.536906 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:01:59.990824 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:00.012013 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:00.012047 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:00.036937 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:00.491644 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:00.512575 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:00.512707 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:00.582291 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:00.703930 1764857 pod_ready.go:103] pod "amd-gpu-device-plugin-nm4lc" in "kube-system" namespace has status "Ready":"False"
	I0414 11:02:00.991416 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:01.084449 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:01.084782 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:01.084937 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:01.499297 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:01.583418 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:01.584652 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:01.585602 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:01.991262 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:02.084806 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:02.085009 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:02.085578 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:02.491423 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:02.513841 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:02.513860 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:02.583154 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:02.991271 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:03.012340 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:03.012441 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:03.036268 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:03.203907 1764857 pod_ready.go:103] pod "amd-gpu-device-plugin-nm4lc" in "kube-system" namespace has status "Ready":"False"
	I0414 11:02:03.491701 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:03.513051 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:03.513055 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:03.581310 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:03.991290 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:04.012306 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:04.012554 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:04.036431 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:04.491448 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:04.512669 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:04.512785 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:04.537055 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:04.991047 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:05.011844 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:05.011953 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:05.036131 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:05.491305 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:05.512444 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:05.513030 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:05.582878 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:05.703783 1764857 pod_ready.go:103] pod "amd-gpu-device-plugin-nm4lc" in "kube-system" namespace has status "Ready":"False"
	I0414 11:02:05.991106 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:06.012164 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:06.012415 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:06.037250 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:06.491647 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:06.512567 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:06.512609 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:06.536197 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:06.991885 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:07.012828 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:07.013043 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:07.036701 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:07.491966 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:07.512614 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:07.512713 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:07.592701 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:07.990923 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:08.012976 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:08.012998 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:08.036886 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:08.203531 1764857 pod_ready.go:103] pod "amd-gpu-device-plugin-nm4lc" in "kube-system" namespace has status "Ready":"False"
	I0414 11:02:08.491755 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:08.512617 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:08.512704 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:08.536623 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:08.992674 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:09.012771 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:09.012803 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:09.036532 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:09.491213 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:09.511522 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:09.511553 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:09.592101 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:09.991214 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:10.012117 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:10.012123 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:10.037286 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:10.491654 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:10.512292 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:10.512435 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:10.535854 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:10.703309 1764857 pod_ready.go:103] pod "amd-gpu-device-plugin-nm4lc" in "kube-system" namespace has status "Ready":"False"
	I0414 11:02:10.990911 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:11.012583 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:11.012745 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:11.036421 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:11.491773 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:11.512490 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:11.512649 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:11.536046 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:11.992300 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:12.012614 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:12.012638 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:12.082312 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:12.492195 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:12.593288 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:12.593377 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:12.593471 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:12.991416 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:13.012203 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:13.012307 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:13.036943 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:13.203286 1764857 pod_ready.go:93] pod "amd-gpu-device-plugin-nm4lc" in "kube-system" namespace has status "Ready":"True"
	I0414 11:02:13.203312 1764857 pod_ready.go:82] duration metric: took 30.506057119s for pod "amd-gpu-device-plugin-nm4lc" in "kube-system" namespace to be "Ready" ...
	I0414 11:02:13.203325 1764857 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-h5vxc" in "kube-system" namespace to be "Ready" ...
	I0414 11:02:13.207902 1764857 pod_ready.go:93] pod "coredns-668d6bf9bc-h5vxc" in "kube-system" namespace has status "Ready":"True"
	I0414 11:02:13.207936 1764857 pod_ready.go:82] duration metric: took 4.603241ms for pod "coredns-668d6bf9bc-h5vxc" in "kube-system" namespace to be "Ready" ...
	I0414 11:02:13.207962 1764857 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-295301" in "kube-system" namespace to be "Ready" ...
	I0414 11:02:13.212044 1764857 pod_ready.go:93] pod "etcd-addons-295301" in "kube-system" namespace has status "Ready":"True"
	I0414 11:02:13.212064 1764857 pod_ready.go:82] duration metric: took 4.045622ms for pod "etcd-addons-295301" in "kube-system" namespace to be "Ready" ...
	I0414 11:02:13.212077 1764857 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-295301" in "kube-system" namespace to be "Ready" ...
	I0414 11:02:13.216218 1764857 pod_ready.go:93] pod "kube-apiserver-addons-295301" in "kube-system" namespace has status "Ready":"True"
	I0414 11:02:13.216242 1764857 pod_ready.go:82] duration metric: took 4.157853ms for pod "kube-apiserver-addons-295301" in "kube-system" namespace to be "Ready" ...
	I0414 11:02:13.216256 1764857 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-295301" in "kube-system" namespace to be "Ready" ...
	I0414 11:02:13.220424 1764857 pod_ready.go:93] pod "kube-controller-manager-addons-295301" in "kube-system" namespace has status "Ready":"True"
	I0414 11:02:13.220446 1764857 pod_ready.go:82] duration metric: took 4.182705ms for pod "kube-controller-manager-addons-295301" in "kube-system" namespace to be "Ready" ...
	I0414 11:02:13.220462 1764857 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5mjsg" in "kube-system" namespace to be "Ready" ...
	I0414 11:02:13.491546 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:13.512540 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:13.512756 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:13.535928 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:13.601988 1764857 pod_ready.go:93] pod "kube-proxy-5mjsg" in "kube-system" namespace has status "Ready":"True"
	I0414 11:02:13.602018 1764857 pod_ready.go:82] duration metric: took 381.547414ms for pod "kube-proxy-5mjsg" in "kube-system" namespace to be "Ready" ...
	I0414 11:02:13.602035 1764857 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-295301" in "kube-system" namespace to be "Ready" ...
	I0414 11:02:13.991507 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:14.002182 1764857 pod_ready.go:93] pod "kube-scheduler-addons-295301" in "kube-system" namespace has status "Ready":"True"
	I0414 11:02:14.002211 1764857 pod_ready.go:82] duration metric: took 400.167441ms for pod "kube-scheduler-addons-295301" in "kube-system" namespace to be "Ready" ...
	I0414 11:02:14.002225 1764857 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-z4kvj" in "kube-system" namespace to be "Ready" ...
	I0414 11:02:14.012149 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:14.012359 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:14.036264 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:14.491422 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:14.512507 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:14.512531 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:14.536193 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:14.991568 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:15.011650 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:15.011858 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:15.036602 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:15.491508 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:15.511688 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:15.511753 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:15.536482 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:15.991704 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:16.007459 1764857 pod_ready.go:103] pod "metrics-server-7fbb699795-z4kvj" in "kube-system" namespace has status "Ready":"False"
	I0414 11:02:16.012115 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:16.012167 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:16.036684 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:16.491177 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:16.512426 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:16.512481 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:16.536444 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:16.991230 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:17.092330 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:17.092491 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:17.092594 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:17.491274 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:17.512201 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:17.512241 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:17.536771 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:17.991674 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:18.011966 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:18.012006 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:18.036973 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:18.494187 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:18.508333 1764857 pod_ready.go:103] pod "metrics-server-7fbb699795-z4kvj" in "kube-system" namespace has status "Ready":"False"
	I0414 11:02:18.512424 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:18.512493 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:18.594878 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:18.991704 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:19.011205 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:19.011291 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:19.036529 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:19.491029 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:19.513461 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:19.513743 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:19.536317 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:19.991403 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:20.011325 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:20.011393 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:20.092511 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:20.491445 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:20.508837 1764857 pod_ready.go:103] pod "metrics-server-7fbb699795-z4kvj" in "kube-system" namespace has status "Ready":"False"
	I0414 11:02:20.511555 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:20.511582 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:20.536352 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:20.993850 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:21.012198 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:21.012238 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:21.036258 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:21.491340 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:21.511681 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:21.511837 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:21.536684 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:21.992581 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:22.013885 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:22.014166 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:22.037104 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:22.491164 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:22.511421 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:22.511802 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:22.536884 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:22.991781 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:23.007503 1764857 pod_ready.go:103] pod "metrics-server-7fbb699795-z4kvj" in "kube-system" namespace has status "Ready":"False"
	I0414 11:02:23.091787 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:23.092036 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:23.092163 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:23.492102 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:23.512164 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:23.512179 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0414 11:02:23.537291 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:23.990428 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:24.008266 1764857 pod_ready.go:93] pod "metrics-server-7fbb699795-z4kvj" in "kube-system" namespace has status "Ready":"True"
	I0414 11:02:24.008290 1764857 pod_ready.go:82] duration metric: took 10.006057825s for pod "metrics-server-7fbb699795-z4kvj" in "kube-system" namespace to be "Ready" ...
	I0414 11:02:24.008301 1764857 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-gmc4h" in "kube-system" namespace to be "Ready" ...
	I0414 11:02:24.011661 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:24.011952 1764857 kapi.go:107] duration metric: took 53.503253312s to wait for kubernetes.io/minikube-addons=registry ...
	I0414 11:02:24.013058 1764857 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-gmc4h" in "kube-system" namespace has status "Ready":"True"
	I0414 11:02:24.013075 1764857 pod_ready.go:82] duration metric: took 4.768153ms for pod "nvidia-device-plugin-daemonset-gmc4h" in "kube-system" namespace to be "Ready" ...
	I0414 11:02:24.013092 1764857 pod_ready.go:39] duration metric: took 41.320509684s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
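	(The pod_ready.go lines report the pod's Ready condition rather than its phase, which is why a pod can log "Ready":"False" while already Running. A hedged sketch of that check, using client-go types; this is a hypothetical helper, not minikube's actual pod_ready.go.)

	    package readiness

	    import corev1 "k8s.io/api/core/v1"

	    // isPodReady mirrors what the pod_ready.go lines report: the pod's
	    // Ready condition, which can be False even while the phase is Running.
	    // Hypothetical helper; not minikube's actual code.
	    func isPodReady(p *corev1.Pod) bool {
	        for _, c := range p.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }
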
	I0414 11:02:24.013113 1764857 api_server.go:52] waiting for apiserver process to appear ...
	I0414 11:02:24.013160 1764857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 11:02:24.013206 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 11:02:24.036593 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:24.054066 1764857 cri.go:89] found id: "ec8e8d3111a2bf70966a505727f218131e4701c7964f400d96401df956a9c903"
	I0414 11:02:24.054094 1764857 cri.go:89] found id: ""
	I0414 11:02:24.054107 1764857 logs.go:282] 1 containers: [ec8e8d3111a2bf70966a505727f218131e4701c7964f400d96401df956a9c903]
	I0414 11:02:24.054164 1764857 ssh_runner.go:195] Run: which crictl
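	(From here the log collects container IDs for the post-mortem: for each component it runs "sudo crictl ps -a --quiet --name=<component>" and records each output line as a found id. A rough sketch of that step follows, using os/exec locally instead of minikube's ssh_runner, which is an assumption; only the crictl flags shown in the log are used.)

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // listContainerIDs mirrors the "crictl ps -a --quiet --name=..." calls
	    // in the log: --quiet prints one container ID per line. Runs locally
	    // via os/exec rather than minikube's ssh_runner.
	    func listContainerIDs(name string) ([]string, error) {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(strings.TrimSpace(string(out))), nil
	    }

	    func main() {
	        ids, err := listContainerIDs("kube-apiserver")
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("%d containers: %v\n", len(ids), ids)
	    }
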
	I0414 11:02:24.057989 1764857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 11:02:24.058132 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 11:02:24.189874 1764857 cri.go:89] found id: "8bcd77c194be3f6b20cdb71a6b8d36fb159c1c57863958e0f85006db0250d358"
	I0414 11:02:24.189902 1764857 cri.go:89] found id: ""
	I0414 11:02:24.189912 1764857 logs.go:282] 1 containers: [8bcd77c194be3f6b20cdb71a6b8d36fb159c1c57863958e0f85006db0250d358]
	I0414 11:02:24.189966 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:24.194723 1764857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 11:02:24.194864 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 11:02:24.492737 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:24.499546 1764857 cri.go:89] found id: "c5d43ce02453613ba750ec3de272d72aefd06a7d939085ec15e32b45f505c26b"
	I0414 11:02:24.499574 1764857 cri.go:89] found id: ""
	I0414 11:02:24.499585 1764857 logs.go:282] 1 containers: [c5d43ce02453613ba750ec3de272d72aefd06a7d939085ec15e32b45f505c26b]
	I0414 11:02:24.499650 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:24.504355 1764857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 11:02:24.504530 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 11:02:24.583477 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:24.583904 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:24.697540 1764857 cri.go:89] found id: "4fcbedf9c30b21d553cc12d39570ab3a404d3eee6d4b8ec2465b384d81ebf27c"
	I0414 11:02:24.697623 1764857 cri.go:89] found id: ""
	I0414 11:02:24.697644 1764857 logs.go:282] 1 containers: [4fcbedf9c30b21d553cc12d39570ab3a404d3eee6d4b8ec2465b384d81ebf27c]
	I0414 11:02:24.697725 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:24.702197 1764857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 11:02:24.702268 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 11:02:24.810296 1764857 cri.go:89] found id: "8b5c5724c3634e9a18fd43da0ca6a9e5dab88eb741a4f883ce4bc17d0c84860c"
	I0414 11:02:24.810342 1764857 cri.go:89] found id: ""
	I0414 11:02:24.810356 1764857 logs.go:282] 1 containers: [8b5c5724c3634e9a18fd43da0ca6a9e5dab88eb741a4f883ce4bc17d0c84860c]
	I0414 11:02:24.810415 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:24.814686 1764857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 11:02:24.814767 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 11:02:24.912192 1764857 cri.go:89] found id: "bf87b6b4987cf9ef9677ce1dd2eb67f475f36b7fb10e9300f3f147a6ccdb26a7"
	I0414 11:02:24.912220 1764857 cri.go:89] found id: ""
	I0414 11:02:24.912230 1764857 logs.go:282] 1 containers: [bf87b6b4987cf9ef9677ce1dd2eb67f475f36b7fb10e9300f3f147a6ccdb26a7]
	I0414 11:02:24.912295 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:24.915836 1764857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 11:02:24.915914 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 11:02:24.999282 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:25.002536 1764857 cri.go:89] found id: "b5a6e4741aeb06d2b697f1ef2d6befae79548d9e84ca78354f44bc3e7f1dae2d"
	I0414 11:02:25.002563 1764857 cri.go:89] found id: ""
	I0414 11:02:25.002581 1764857 logs.go:282] 1 containers: [b5a6e4741aeb06d2b697f1ef2d6befae79548d9e84ca78354f44bc3e7f1dae2d]
	I0414 11:02:25.002640 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:25.006681 1764857 logs.go:123] Gathering logs for kube-scheduler [4fcbedf9c30b21d553cc12d39570ab3a404d3eee6d4b8ec2465b384d81ebf27c] ...
	I0414 11:02:25.006713 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4fcbedf9c30b21d553cc12d39570ab3a404d3eee6d4b8ec2465b384d81ebf27c"
	I0414 11:02:25.011808 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:25.036535 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:25.092924 1764857 logs.go:123] Gathering logs for kube-proxy [8b5c5724c3634e9a18fd43da0ca6a9e5dab88eb741a4f883ce4bc17d0c84860c] ...
	I0414 11:02:25.092973 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b5c5724c3634e9a18fd43da0ca6a9e5dab88eb741a4f883ce4bc17d0c84860c"
	I0414 11:02:25.134700 1764857 logs.go:123] Gathering logs for dmesg ...
	I0414 11:02:25.134730 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 11:02:25.188086 1764857 logs.go:123] Gathering logs for kube-apiserver [ec8e8d3111a2bf70966a505727f218131e4701c7964f400d96401df956a9c903] ...
	I0414 11:02:25.188204 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec8e8d3111a2bf70966a505727f218131e4701c7964f400d96401df956a9c903"
	I0414 11:02:25.298547 1764857 logs.go:123] Gathering logs for coredns [c5d43ce02453613ba750ec3de272d72aefd06a7d939085ec15e32b45f505c26b] ...
	I0414 11:02:25.298587 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5d43ce02453613ba750ec3de272d72aefd06a7d939085ec15e32b45f505c26b"
	I0414 11:02:25.341085 1764857 logs.go:123] Gathering logs for kube-controller-manager [bf87b6b4987cf9ef9677ce1dd2eb67f475f36b7fb10e9300f3f147a6ccdb26a7] ...
	I0414 11:02:25.341121 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf87b6b4987cf9ef9677ce1dd2eb67f475f36b7fb10e9300f3f147a6ccdb26a7"
	I0414 11:02:25.447410 1764857 logs.go:123] Gathering logs for kindnet [b5a6e4741aeb06d2b697f1ef2d6befae79548d9e84ca78354f44bc3e7f1dae2d] ...
	I0414 11:02:25.447451 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5a6e4741aeb06d2b697f1ef2d6befae79548d9e84ca78354f44bc3e7f1dae2d"
	I0414 11:02:25.491827 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:25.496732 1764857 logs.go:123] Gathering logs for CRI-O ...
	I0414 11:02:25.496763 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 11:02:25.511942 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:25.536800 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:25.575406 1764857 logs.go:123] Gathering logs for container status ...
	I0414 11:02:25.575454 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 11:02:25.624675 1764857 logs.go:123] Gathering logs for kubelet ...
	I0414 11:02:25.624711 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 11:02:25.753290 1764857 logs.go:123] Gathering logs for describe nodes ...
	I0414 11:02:25.753344 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0414 11:02:25.910807 1764857 logs.go:123] Gathering logs for etcd [8bcd77c194be3f6b20cdb71a6b8d36fb159c1c57863958e0f85006db0250d358] ...
	I0414 11:02:25.910851 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bcd77c194be3f6b20cdb71a6b8d36fb159c1c57863958e0f85006db0250d358"
	I0414 11:02:25.991642 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:26.012463 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:26.036052 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:26.491401 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:26.512315 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:26.536987 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:26.992082 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:27.011717 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:27.036487 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:27.491163 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:27.511991 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:27.536489 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:27.991147 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:28.011907 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:28.036336 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:28.490960 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:28.496854 1764857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:02:28.512358 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:28.513810 1764857 api_server.go:72] duration metric: took 1m5.214798688s to wait for apiserver process to appear ...
	I0414 11:02:28.513833 1764857 api_server.go:88] waiting for apiserver healthz status ...
	I0414 11:02:28.513877 1764857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 11:02:28.513945 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 11:02:28.582446 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:28.688560 1764857 cri.go:89] found id: "ec8e8d3111a2bf70966a505727f218131e4701c7964f400d96401df956a9c903"
	I0414 11:02:28.688589 1764857 cri.go:89] found id: ""
	I0414 11:02:28.688599 1764857 logs.go:282] 1 containers: [ec8e8d3111a2bf70966a505727f218131e4701c7964f400d96401df956a9c903]
	I0414 11:02:28.688659 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:28.693550 1764857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 11:02:28.693641 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 11:02:28.804665 1764857 cri.go:89] found id: "8bcd77c194be3f6b20cdb71a6b8d36fb159c1c57863958e0f85006db0250d358"
	I0414 11:02:28.804691 1764857 cri.go:89] found id: ""
	I0414 11:02:28.804700 1764857 logs.go:282] 1 containers: [8bcd77c194be3f6b20cdb71a6b8d36fb159c1c57863958e0f85006db0250d358]
	I0414 11:02:28.804779 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:28.808791 1764857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 11:02:28.808888 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 11:02:28.890741 1764857 cri.go:89] found id: "c5d43ce02453613ba750ec3de272d72aefd06a7d939085ec15e32b45f505c26b"
	I0414 11:02:28.890765 1764857 cri.go:89] found id: ""
	I0414 11:02:28.890775 1764857 logs.go:282] 1 containers: [c5d43ce02453613ba750ec3de272d72aefd06a7d939085ec15e32b45f505c26b]
	I0414 11:02:28.890834 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:28.894695 1764857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 11:02:28.894778 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 11:02:28.933245 1764857 cri.go:89] found id: "4fcbedf9c30b21d553cc12d39570ab3a404d3eee6d4b8ec2465b384d81ebf27c"
	I0414 11:02:28.933274 1764857 cri.go:89] found id: ""
	I0414 11:02:28.933284 1764857 logs.go:282] 1 containers: [4fcbedf9c30b21d553cc12d39570ab3a404d3eee6d4b8ec2465b384d81ebf27c]
	I0414 11:02:28.933348 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:28.937403 1764857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 11:02:28.937477 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 11:02:28.992101 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:29.012597 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:29.017387 1764857 cri.go:89] found id: "8b5c5724c3634e9a18fd43da0ca6a9e5dab88eb741a4f883ce4bc17d0c84860c"
	I0414 11:02:29.017411 1764857 cri.go:89] found id: ""
	I0414 11:02:29.017420 1764857 logs.go:282] 1 containers: [8b5c5724c3634e9a18fd43da0ca6a9e5dab88eb741a4f883ce4bc17d0c84860c]
	I0414 11:02:29.017473 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:29.021326 1764857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 11:02:29.021398 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 11:02:29.036290 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:29.103012 1764857 cri.go:89] found id: "bf87b6b4987cf9ef9677ce1dd2eb67f475f36b7fb10e9300f3f147a6ccdb26a7"
	I0414 11:02:29.103050 1764857 cri.go:89] found id: ""
	I0414 11:02:29.103062 1764857 logs.go:282] 1 containers: [bf87b6b4987cf9ef9677ce1dd2eb67f475f36b7fb10e9300f3f147a6ccdb26a7]
	I0414 11:02:29.103124 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:29.106953 1764857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 11:02:29.107025 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 11:02:29.187831 1764857 cri.go:89] found id: "b5a6e4741aeb06d2b697f1ef2d6befae79548d9e84ca78354f44bc3e7f1dae2d"
	I0414 11:02:29.187861 1764857 cri.go:89] found id: ""
	I0414 11:02:29.187871 1764857 logs.go:282] 1 containers: [b5a6e4741aeb06d2b697f1ef2d6befae79548d9e84ca78354f44bc3e7f1dae2d]
	I0414 11:02:29.187923 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:29.192600 1764857 logs.go:123] Gathering logs for kindnet [b5a6e4741aeb06d2b697f1ef2d6befae79548d9e84ca78354f44bc3e7f1dae2d] ...
	I0414 11:02:29.192631 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5a6e4741aeb06d2b697f1ef2d6befae79548d9e84ca78354f44bc3e7f1dae2d"
	I0414 11:02:29.235449 1764857 logs.go:123] Gathering logs for container status ...
	I0414 11:02:29.235477 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 11:02:29.326852 1764857 logs.go:123] Gathering logs for kubelet ...
	I0414 11:02:29.326893 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 11:02:29.482170 1764857 logs.go:123] Gathering logs for dmesg ...
	I0414 11:02:29.482212 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 11:02:29.491060 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:29.502108 1764857 logs.go:123] Gathering logs for describe nodes ...
	I0414 11:02:29.502142 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0414 11:02:29.512847 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:29.536861 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:29.689439 1764857 logs.go:123] Gathering logs for etcd [8bcd77c194be3f6b20cdb71a6b8d36fb159c1c57863958e0f85006db0250d358] ...
	I0414 11:02:29.689484 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bcd77c194be3f6b20cdb71a6b8d36fb159c1c57863958e0f85006db0250d358"
	I0414 11:02:29.741304 1764857 logs.go:123] Gathering logs for coredns [c5d43ce02453613ba750ec3de272d72aefd06a7d939085ec15e32b45f505c26b] ...
	I0414 11:02:29.741349 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5d43ce02453613ba750ec3de272d72aefd06a7d939085ec15e32b45f505c26b"
	I0414 11:02:29.805060 1764857 logs.go:123] Gathering logs for kube-scheduler [4fcbedf9c30b21d553cc12d39570ab3a404d3eee6d4b8ec2465b384d81ebf27c] ...
	I0414 11:02:29.805109 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4fcbedf9c30b21d553cc12d39570ab3a404d3eee6d4b8ec2465b384d81ebf27c"
	I0414 11:02:29.894808 1764857 logs.go:123] Gathering logs for CRI-O ...
	I0414 11:02:29.894856 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 11:02:29.979225 1764857 logs.go:123] Gathering logs for kube-apiserver [ec8e8d3111a2bf70966a505727f218131e4701c7964f400d96401df956a9c903] ...
	I0414 11:02:29.979278 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec8e8d3111a2bf70966a505727f218131e4701c7964f400d96401df956a9c903"
	I0414 11:02:29.991606 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:30.012791 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:30.030974 1764857 logs.go:123] Gathering logs for kube-proxy [8b5c5724c3634e9a18fd43da0ca6a9e5dab88eb741a4f883ce4bc17d0c84860c] ...
	I0414 11:02:30.031018 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b5c5724c3634e9a18fd43da0ca6a9e5dab88eb741a4f883ce4bc17d0c84860c"
	I0414 11:02:30.036113 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:30.069860 1764857 logs.go:123] Gathering logs for kube-controller-manager [bf87b6b4987cf9ef9677ce1dd2eb67f475f36b7fb10e9300f3f147a6ccdb26a7] ...
	I0414 11:02:30.069892 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf87b6b4987cf9ef9677ce1dd2eb67f475f36b7fb10e9300f3f147a6ccdb26a7"
	I0414 11:02:30.684490 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:30.684604 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0414 11:02:30.685077 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:30.997214 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:31.103226 1764857 kapi.go:107] duration metric: took 56.570085167s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0414 11:02:31.103236 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:31.105044 1764857 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-295301 cluster.
	I0414 11:02:31.106285 1764857 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0414 11:02:31.107703 1764857 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
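
The `gcp-auth-skip-secret` opt-out described in the three messages above is keyed on label presence at pod-creation time; the label value does not matter. A minimal sketch of an opted-out pod, assuming a hypothetical name and image (not from this run):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds               # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"   # key presence skips credential mounting; value is arbitrary
    spec:
      containers:
      - name: app
        image: busybox                 # hypothetical image
        command: ["sleep", "3600"]
    EOF
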
	I0414 11:02:31.494326 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:31.582876 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:31.991955 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:32.012171 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:32.491777 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:32.512846 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:32.636912 1764857 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0414 11:02:32.641967 1764857 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0414 11:02:32.642972 1764857 api_server.go:141] control plane version: v1.32.2
	I0414 11:02:32.643008 1764857 api_server.go:131] duration metric: took 4.129167245s to wait for apiserver health ...
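
The healthz probe logged above can be reproduced by hand. On a default cluster the /healthz path is readable without client credentials (the `system:public-info-viewer` binding covers unauthenticated requests), and `-k` skips verification of minikube's self-signed CA; a sketch, not part of this run:

    curl -sk https://192.168.49.2:8443/healthz
    # ok
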
	I0414 11:02:32.643016 1764857 system_pods.go:43] waiting for kube-system pods to appear ...
	I0414 11:02:32.643042 1764857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0414 11:02:32.643103 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0414 11:02:32.699664 1764857 cri.go:89] found id: "ec8e8d3111a2bf70966a505727f218131e4701c7964f400d96401df956a9c903"
	I0414 11:02:32.699694 1764857 cri.go:89] found id: ""
	I0414 11:02:32.699705 1764857 logs.go:282] 1 containers: [ec8e8d3111a2bf70966a505727f218131e4701c7964f400d96401df956a9c903]
	I0414 11:02:32.699769 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:32.703922 1764857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0414 11:02:32.703991 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0414 11:02:32.741148 1764857 cri.go:89] found id: "8bcd77c194be3f6b20cdb71a6b8d36fb159c1c57863958e0f85006db0250d358"
	I0414 11:02:32.741173 1764857 cri.go:89] found id: ""
	I0414 11:02:32.741182 1764857 logs.go:282] 1 containers: [8bcd77c194be3f6b20cdb71a6b8d36fb159c1c57863958e0f85006db0250d358]
	I0414 11:02:32.741240 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:32.781140 1764857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0414 11:02:32.781216 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0414 11:02:32.827699 1764857 cri.go:89] found id: "c5d43ce02453613ba750ec3de272d72aefd06a7d939085ec15e32b45f505c26b"
	I0414 11:02:32.827732 1764857 cri.go:89] found id: ""
	I0414 11:02:32.827744 1764857 logs.go:282] 1 containers: [c5d43ce02453613ba750ec3de272d72aefd06a7d939085ec15e32b45f505c26b]
	I0414 11:02:32.827803 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:32.832129 1764857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0414 11:02:32.832220 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0414 11:02:32.919397 1764857 cri.go:89] found id: "4fcbedf9c30b21d553cc12d39570ab3a404d3eee6d4b8ec2465b384d81ebf27c"
	I0414 11:02:32.919426 1764857 cri.go:89] found id: ""
	I0414 11:02:32.919437 1764857 logs.go:282] 1 containers: [4fcbedf9c30b21d553cc12d39570ab3a404d3eee6d4b8ec2465b384d81ebf27c]
	I0414 11:02:32.919499 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:32.923324 1764857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0414 11:02:32.923394 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0414 11:02:32.991510 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:33.012840 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:33.019309 1764857 cri.go:89] found id: "8b5c5724c3634e9a18fd43da0ca6a9e5dab88eb741a4f883ce4bc17d0c84860c"
	I0414 11:02:33.019333 1764857 cri.go:89] found id: ""
	I0414 11:02:33.019341 1764857 logs.go:282] 1 containers: [8b5c5724c3634e9a18fd43da0ca6a9e5dab88eb741a4f883ce4bc17d0c84860c]
	I0414 11:02:33.019388 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:33.023067 1764857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0414 11:02:33.023135 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0414 11:02:33.109438 1764857 cri.go:89] found id: "bf87b6b4987cf9ef9677ce1dd2eb67f475f36b7fb10e9300f3f147a6ccdb26a7"
	I0414 11:02:33.109466 1764857 cri.go:89] found id: ""
	I0414 11:02:33.109475 1764857 logs.go:282] 1 containers: [bf87b6b4987cf9ef9677ce1dd2eb67f475f36b7fb10e9300f3f147a6ccdb26a7]
	I0414 11:02:33.109533 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:33.113330 1764857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0414 11:02:33.113403 1764857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0414 11:02:33.183253 1764857 cri.go:89] found id: "b5a6e4741aeb06d2b697f1ef2d6befae79548d9e84ca78354f44bc3e7f1dae2d"
	I0414 11:02:33.183275 1764857 cri.go:89] found id: ""
	I0414 11:02:33.183283 1764857 logs.go:282] 1 containers: [b5a6e4741aeb06d2b697f1ef2d6befae79548d9e84ca78354f44bc3e7f1dae2d]
	I0414 11:02:33.183328 1764857 ssh_runner.go:195] Run: which crictl
	I0414 11:02:33.187294 1764857 logs.go:123] Gathering logs for coredns [c5d43ce02453613ba750ec3de272d72aefd06a7d939085ec15e32b45f505c26b] ...
	I0414 11:02:33.187330 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5d43ce02453613ba750ec3de272d72aefd06a7d939085ec15e32b45f505c26b"
	I0414 11:02:33.230735 1764857 logs.go:123] Gathering logs for kube-scheduler [4fcbedf9c30b21d553cc12d39570ab3a404d3eee6d4b8ec2465b384d81ebf27c] ...
	I0414 11:02:33.230779 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4fcbedf9c30b21d553cc12d39570ab3a404d3eee6d4b8ec2465b384d81ebf27c"
	I0414 11:02:33.304002 1764857 logs.go:123] Gathering logs for kube-proxy [8b5c5724c3634e9a18fd43da0ca6a9e5dab88eb741a4f883ce4bc17d0c84860c] ...
	I0414 11:02:33.304043 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b5c5724c3634e9a18fd43da0ca6a9e5dab88eb741a4f883ce4bc17d0c84860c"
	I0414 11:02:33.342549 1764857 logs.go:123] Gathering logs for kube-controller-manager [bf87b6b4987cf9ef9677ce1dd2eb67f475f36b7fb10e9300f3f147a6ccdb26a7] ...
	I0414 11:02:33.342579 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf87b6b4987cf9ef9677ce1dd2eb67f475f36b7fb10e9300f3f147a6ccdb26a7"
	I0414 11:02:33.435277 1764857 logs.go:123] Gathering logs for kubelet ...
	I0414 11:02:33.435323 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0414 11:02:33.491594 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:33.512571 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:33.541901 1764857 logs.go:123] Gathering logs for kindnet [b5a6e4741aeb06d2b697f1ef2d6befae79548d9e84ca78354f44bc3e7f1dae2d] ...
	I0414 11:02:33.541950 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5a6e4741aeb06d2b697f1ef2d6befae79548d9e84ca78354f44bc3e7f1dae2d"
	I0414 11:02:33.588774 1764857 logs.go:123] Gathering logs for CRI-O ...
	I0414 11:02:33.588818 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0414 11:02:33.666109 1764857 logs.go:123] Gathering logs for container status ...
	I0414 11:02:33.666152 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0414 11:02:33.717111 1764857 logs.go:123] Gathering logs for dmesg ...
	I0414 11:02:33.717144 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0414 11:02:33.735149 1764857 logs.go:123] Gathering logs for describe nodes ...
	I0414 11:02:33.735179 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0414 11:02:33.916251 1764857 logs.go:123] Gathering logs for kube-apiserver [ec8e8d3111a2bf70966a505727f218131e4701c7964f400d96401df956a9c903] ...
	I0414 11:02:33.916294 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec8e8d3111a2bf70966a505727f218131e4701c7964f400d96401df956a9c903"
	I0414 11:02:33.991112 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:34.012618 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:34.021747 1764857 logs.go:123] Gathering logs for etcd [8bcd77c194be3f6b20cdb71a6b8d36fb159c1c57863958e0f85006db0250d358] ...
	I0414 11:02:34.021795 1764857 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bcd77c194be3f6b20cdb71a6b8d36fb159c1c57863958e0f85006db0250d358"
	I0414 11:02:34.491466 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:34.512345 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:34.991229 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:35.011759 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:35.491346 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:35.511818 1764857 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0414 11:02:35.991760 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:36.012449 1764857 kapi.go:107] duration metric: took 1m5.503761942s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0414 11:02:36.491560 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:36.613571 1764857 system_pods.go:59] 19 kube-system pods found
	I0414 11:02:36.613610 1764857 system_pods.go:61] "amd-gpu-device-plugin-nm4lc" [c90097ff-1058-422f-9d9c-bcc41a887e7c] Running
	I0414 11:02:36.613615 1764857 system_pods.go:61] "coredns-668d6bf9bc-h5vxc" [0ae7988e-1b30-4fa0-8266-6c71a6a8fb2f] Running
	I0414 11:02:36.613619 1764857 system_pods.go:61] "csi-hostpath-attacher-0" [3f7d9dfe-f6f8-4388-993d-2575755a4cac] Running
	I0414 11:02:36.613623 1764857 system_pods.go:61] "csi-hostpath-resizer-0" [1b80db24-e549-4288-8ac3-276b7c64a8cd] Running
	I0414 11:02:36.613630 1764857 system_pods.go:61] "csi-hostpathplugin-g5426" [a601e1ee-f78e-4775-b1aa-d386a64d690f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0414 11:02:36.613652 1764857 system_pods.go:61] "etcd-addons-295301" [e02124c0-2a5a-454e-98f0-740d6b872c84] Running
	I0414 11:02:36.613658 1764857 system_pods.go:61] "kindnet-ljtzk" [4d639eaa-1f9c-482c-a785-599e7e0fc549] Running
	I0414 11:02:36.613662 1764857 system_pods.go:61] "kube-apiserver-addons-295301" [11ed1b33-ad1c-4530-abd3-6a26a28add43] Running
	I0414 11:02:36.613666 1764857 system_pods.go:61] "kube-controller-manager-addons-295301" [3b387030-4c90-4d70-a075-d323489724ef] Running
	I0414 11:02:36.613671 1764857 system_pods.go:61] "kube-ingress-dns-minikube" [a1a0229c-bbeb-4a30-9b65-1b4f87b6534f] Running
	I0414 11:02:36.613675 1764857 system_pods.go:61] "kube-proxy-5mjsg" [123c9655-d43a-4f09-a916-7902ddc69233] Running
	I0414 11:02:36.613678 1764857 system_pods.go:61] "kube-scheduler-addons-295301" [6c35b369-6c04-493a-92da-94019e2b2407] Running
	I0414 11:02:36.613681 1764857 system_pods.go:61] "metrics-server-7fbb699795-z4kvj" [b4c6191a-f9ed-4e2e-9c86-aabd642b2563] Running
	I0414 11:02:36.613685 1764857 system_pods.go:61] "nvidia-device-plugin-daemonset-gmc4h" [6757ac50-f6ee-42cd-bea8-9399727ed2d9] Running
	I0414 11:02:36.613690 1764857 system_pods.go:61] "registry-6c88467877-hcqpt" [0dc9a30a-c9b5-4470-a64f-9fa51f58652d] Running
	I0414 11:02:36.613693 1764857 system_pods.go:61] "registry-proxy-k9cnv" [87c81d3e-261b-419a-9046-7bf3e82c1778] Running
	I0414 11:02:36.613698 1764857 system_pods.go:61] "snapshot-controller-68b874b76f-ms7zd" [cbe0a4e2-7237-45b2-8dd0-cb26050eaf90] Running
	I0414 11:02:36.613705 1764857 system_pods.go:61] "snapshot-controller-68b874b76f-xmlcg" [d5f4ccd7-fd32-4ee1-a6c4-1005cdaaf4c1] Running
	I0414 11:02:36.613708 1764857 system_pods.go:61] "storage-provisioner" [2d966062-4a64-4e90-b6e6-571c17a9c110] Running
	I0414 11:02:36.613713 1764857 system_pods.go:74] duration metric: took 3.970691561s to wait for pod list to return data ...
	I0414 11:02:36.613720 1764857 default_sa.go:34] waiting for default service account to be created ...
	I0414 11:02:36.615764 1764857 default_sa.go:45] found service account: "default"
	I0414 11:02:36.615788 1764857 default_sa.go:55] duration metric: took 2.061276ms for default service account to be created ...
	I0414 11:02:36.615797 1764857 system_pods.go:116] waiting for k8s-apps to be running ...
	I0414 11:02:36.619488 1764857 system_pods.go:86] 19 kube-system pods found
	I0414 11:02:36.619521 1764857 system_pods.go:89] "amd-gpu-device-plugin-nm4lc" [c90097ff-1058-422f-9d9c-bcc41a887e7c] Running
	I0414 11:02:36.619528 1764857 system_pods.go:89] "coredns-668d6bf9bc-h5vxc" [0ae7988e-1b30-4fa0-8266-6c71a6a8fb2f] Running
	I0414 11:02:36.619533 1764857 system_pods.go:89] "csi-hostpath-attacher-0" [3f7d9dfe-f6f8-4388-993d-2575755a4cac] Running
	I0414 11:02:36.619537 1764857 system_pods.go:89] "csi-hostpath-resizer-0" [1b80db24-e549-4288-8ac3-276b7c64a8cd] Running
	I0414 11:02:36.619544 1764857 system_pods.go:89] "csi-hostpathplugin-g5426" [a601e1ee-f78e-4775-b1aa-d386a64d690f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0414 11:02:36.619548 1764857 system_pods.go:89] "etcd-addons-295301" [e02124c0-2a5a-454e-98f0-740d6b872c84] Running
	I0414 11:02:36.619554 1764857 system_pods.go:89] "kindnet-ljtzk" [4d639eaa-1f9c-482c-a785-599e7e0fc549] Running
	I0414 11:02:36.619560 1764857 system_pods.go:89] "kube-apiserver-addons-295301" [11ed1b33-ad1c-4530-abd3-6a26a28add43] Running
	I0414 11:02:36.619564 1764857 system_pods.go:89] "kube-controller-manager-addons-295301" [3b387030-4c90-4d70-a075-d323489724ef] Running
	I0414 11:02:36.619568 1764857 system_pods.go:89] "kube-ingress-dns-minikube" [a1a0229c-bbeb-4a30-9b65-1b4f87b6534f] Running
	I0414 11:02:36.619572 1764857 system_pods.go:89] "kube-proxy-5mjsg" [123c9655-d43a-4f09-a916-7902ddc69233] Running
	I0414 11:02:36.619660 1764857 system_pods.go:89] "kube-scheduler-addons-295301" [6c35b369-6c04-493a-92da-94019e2b2407] Running
	I0414 11:02:36.619703 1764857 system_pods.go:89] "metrics-server-7fbb699795-z4kvj" [b4c6191a-f9ed-4e2e-9c86-aabd642b2563] Running
	I0414 11:02:36.619712 1764857 system_pods.go:89] "nvidia-device-plugin-daemonset-gmc4h" [6757ac50-f6ee-42cd-bea8-9399727ed2d9] Running
	I0414 11:02:36.619716 1764857 system_pods.go:89] "registry-6c88467877-hcqpt" [0dc9a30a-c9b5-4470-a64f-9fa51f58652d] Running
	I0414 11:02:36.619720 1764857 system_pods.go:89] "registry-proxy-k9cnv" [87c81d3e-261b-419a-9046-7bf3e82c1778] Running
	I0414 11:02:36.619724 1764857 system_pods.go:89] "snapshot-controller-68b874b76f-ms7zd" [cbe0a4e2-7237-45b2-8dd0-cb26050eaf90] Running
	I0414 11:02:36.619727 1764857 system_pods.go:89] "snapshot-controller-68b874b76f-xmlcg" [d5f4ccd7-fd32-4ee1-a6c4-1005cdaaf4c1] Running
	I0414 11:02:36.619731 1764857 system_pods.go:89] "storage-provisioner" [2d966062-4a64-4e90-b6e6-571c17a9c110] Running
	I0414 11:02:36.619742 1764857 system_pods.go:126] duration metric: took 3.939115ms to wait for k8s-apps to be running ...
	I0414 11:02:36.619752 1764857 system_svc.go:44] waiting for kubelet service to be running ....
	I0414 11:02:36.619806 1764857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 11:02:36.633501 1764857 system_svc.go:56] duration metric: took 13.735114ms WaitForService to wait for kubelet
	I0414 11:02:36.633540 1764857 kubeadm.go:582] duration metric: took 1m13.334532916s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0414 11:02:36.633566 1764857 node_conditions.go:102] verifying NodePressure condition ...
	I0414 11:02:36.636704 1764857 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0414 11:02:36.636735 1764857 node_conditions.go:123] node cpu capacity is 8
	I0414 11:02:36.636752 1764857 node_conditions.go:105] duration metric: took 3.180359ms to run NodePressure ...
	I0414 11:02:36.636765 1764857 start.go:241] waiting for startup goroutines ...
	I0414 11:02:36.991372 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:37.497284 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:37.991826 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:38.491963 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:38.991568 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:39.491610 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:39.991360 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:40.491412 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:40.991258 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:41.491458 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:41.991794 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:42.491740 1764857 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0414 11:02:42.991250 1764857 kapi.go:107] duration metric: took 1m11.003836275s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0414 11:02:42.992919 1764857 out.go:177] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, ingress-dns, storage-provisioner, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0414 11:02:42.994107 1764857 addons.go:514] duration metric: took 1m19.695170339s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin nvidia-device-plugin ingress-dns storage-provisioner inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
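
The enabled-addon set printed above can be re-checked against the same profile at any time; a sketch using the binary from this run (output shape abbreviated):

    out/minikube-linux-amd64 -p addons-295301 addons list
    # prints a table with one row per addon and an enabled/disabled status column
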
	I0414 11:02:42.994155 1764857 start.go:246] waiting for cluster config update ...
	I0414 11:02:42.994173 1764857 start.go:255] writing updated cluster config ...
	I0414 11:02:42.994436 1764857 ssh_runner.go:195] Run: rm -f paused
	I0414 11:02:43.046360 1764857 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0414 11:02:43.048231 1764857 out.go:177] * Done! kubectl is now configured to use "addons-295301" cluster and "default" namespace by default
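
At this point the kubeconfig context is the profile name, so bare kubectl commands target this cluster. A quick sanity check, as a sketch (commands not part of this run; the node name and version below come from the "describe nodes" output further down):

    kubectl config current-context
    # addons-295301
    kubectl get nodes
    # NAME            STATUS   ROLES           AGE   VERSION
    # addons-295301   Ready    control-plane   ...   v1.32.2
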
	
	
	==> CRI-O <==
	Apr 14 11:04:18 addons-295301 crio[1055]: time="2025-04-14 11:04:18.499447457Z" level=info msg="Removed pod sandbox: d161fcb06f3f9c3f0c4b1982588d2812a6a0aa69c6e7c7f1285344e1767494f0" id=f5caa260-ecbd-4dec-a478-ab5e2b57dcf9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.026996946Z" level=info msg="Running pod sandbox: default/hello-world-app-7d9564db4-f6hj4/POD" id=58b6b0d2-6ff1-4a98-9c70-845a9a08735e name=/runtime.v1.RuntimeService/RunPodSandbox
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.027104721Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.084509400Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-f6hj4 Namespace:default ID:ec79210e6134275e3df462e86a6e9cd92ecb3a8314f22e4177e55d9042ba9ebb UID:c2d383b5-f3d9-47fe-8078-79abc9029238 NetNS:/var/run/netns/4885237a-9d0c-4085-95f0-0da1ebb40276 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.084563035Z" level=info msg="Adding pod default_hello-world-app-7d9564db4-f6hj4 to CNI network \"kindnet\" (type=ptp)"
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.099558322Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-f6hj4 Namespace:default ID:ec79210e6134275e3df462e86a6e9cd92ecb3a8314f22e4177e55d9042ba9ebb UID:c2d383b5-f3d9-47fe-8078-79abc9029238 NetNS:/var/run/netns/4885237a-9d0c-4085-95f0-0da1ebb40276 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.099752849Z" level=info msg="Checking pod default_hello-world-app-7d9564db4-f6hj4 for CNI network kindnet (type=ptp)"
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.102647623Z" level=info msg="Ran pod sandbox ec79210e6134275e3df462e86a6e9cd92ecb3a8314f22e4177e55d9042ba9ebb with infra container: default/hello-world-app-7d9564db4-f6hj4/POD" id=58b6b0d2-6ff1-4a98-9c70-845a9a08735e name=/runtime.v1.RuntimeService/RunPodSandbox
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.103936150Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=314411a7-de6d-4bb5-a7c3-47be67af879d name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.104220316Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=314411a7-de6d-4bb5-a7c3-47be67af879d name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.104874801Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=1d7d4876-05d9-484e-9ec8-202e39fa9e4f name=/runtime.v1.ImageService/PullImage
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.110693536Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.330397621Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.794594858Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=1d7d4876-05d9-484e-9ec8-202e39fa9e4f name=/runtime.v1.ImageService/PullImage
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.795186041Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f1e004aa-6250-48f4-8d1b-1abdac45aae9 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.795838208Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f1e004aa-6250-48f4-8d1b-1abdac45aae9 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.796781704Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d42ad4e5-ac87-4a9d-81c1-e7a7edf28519 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.797564802Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d42ad4e5-ac87-4a9d-81c1-e7a7edf28519 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.798438270Z" level=info msg="Creating container: default/hello-world-app-7d9564db4-f6hj4/hello-world-app" id=efc605b9-9b89-4154-98a1-73a94a8249f2 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.798542382Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.814825400Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4c3acff25a7e73594ab83089d6a13868f8c5800999a009d064e0267befce8c97/merged/etc/passwd: no such file or directory"
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.814862829Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4c3acff25a7e73594ab83089d6a13868f8c5800999a009d064e0267befce8c97/merged/etc/group: no such file or directory"
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.852084296Z" level=info msg="Created container f154481125556452451748646c8b13bb779e5460e60899bf0e487c4e77606588: default/hello-world-app-7d9564db4-f6hj4/hello-world-app" id=efc605b9-9b89-4154-98a1-73a94a8249f2 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.852722917Z" level=info msg="Starting container: f154481125556452451748646c8b13bb779e5460e60899bf0e487c4e77606588" id=3abe27ca-22c2-4652-8f0e-79796e1ee3f8 name=/runtime.v1.RuntimeService/StartContainer
	Apr 14 11:05:42 addons-295301 crio[1055]: time="2025-04-14 11:05:42.858968944Z" level=info msg="Started container" PID=11145 containerID=f154481125556452451748646c8b13bb779e5460e60899bf0e487c4e77606588 description=default/hello-world-app-7d9564db4-f6hj4/hello-world-app id=3abe27ca-22c2-4652-8f0e-79796e1ee3f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec79210e6134275e3df462e86a6e9cd92ecb3a8314f22e4177e55d9042ba9ebb
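
The journal above records the full create path for hello-world-app: image status check (not found), pull by tag, tag resolving to a digest, container create, container start. The pull step can be repeated from inside the node with crictl; a sketch, not part of this run (the image ID prefix and ~4.9MB size match the ImageStatus entries above):

    sudo crictl pull docker.io/kicbase/echo-server:1.0
    sudo crictl images | grep echo-server
    # docker.io/kicbase/echo-server   1.0   9056ab77afb8e   4.94MB
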
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	f154481125556       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   ec79210e61342       hello-world-app-7d9564db4-f6hj4
	3589d1b3a576c       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago            Running             nginx                     0                   f687943c221f1       nginx
	4a1b0e0839a9d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago            Running             busybox                   0                   3b7d29e5e2d27       busybox
	ba2d198e79fa3       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago            Running             controller                0                   cb28d3c146f46       ingress-nginx-controller-56d7c84fd4-tdwx8
	9caff146c5ca8       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago            Running             local-path-provisioner    0                   36db673094e33       local-path-provisioner-76f89f99b5-5w8m8
	486a86d74828d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago            Exited              patch                     0                   dd6aa08f254f9       ingress-nginx-admission-patch-sfpns
	a002ebb98f4b2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago            Exited              create                    0                   67c68c4a8fbed       ingress-nginx-admission-create-ffhv7
	cfe08d199f78b       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             3 minutes ago            Running             minikube-ingress-dns      0                   4dc2703a2a29e       kube-ingress-dns-minikube
	fef079eeffb3f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   df1f56fe1e22f       storage-provisioner
	c5d43ce024536       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago            Running             coredns                   0                   6a7e03120c3bc       coredns-668d6bf9bc-h5vxc
	b5a6e4741aeb0       docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495                           4 minutes ago            Running             kindnet-cni               0                   5d7a9eb3be6c6       kindnet-ljtzk
	8b5c5724c3634       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                             4 minutes ago            Running             kube-proxy                0                   4589e7ecec540       kube-proxy-5mjsg
	bf87b6b4987cf       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                             4 minutes ago            Running             kube-controller-manager   0                   ebe76ac0868fb       kube-controller-manager-addons-295301
	ec8e8d3111a2b       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                             4 minutes ago            Running             kube-apiserver            0                   f40893eed3c07       kube-apiserver-addons-295301
	4fcbedf9c30b2       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                             4 minutes ago            Running             kube-scheduler            0                   e29109f50251e       kube-scheduler-addons-295301
	8bcd77c194be3       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago            Running             etcd                      0                   6a101fa4709b1       etcd-addons-295301
	
	
	==> coredns [c5d43ce02453613ba750ec3de272d72aefd06a7d939085ec15e32b45f505c26b] <==
	[INFO] 10.244.0.14:47306 - 51983 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000113948s
	[INFO] 10.244.0.14:35117 - 29372 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.007534227s
	[INFO] 10.244.0.14:35117 - 29720 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.007809483s
	[INFO] 10.244.0.14:52935 - 43199 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007028764s
	[INFO] 10.244.0.14:52935 - 43595 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007560987s
	[INFO] 10.244.0.14:36573 - 1727 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.075761574s
	[INFO] 10.244.0.14:36573 - 2078 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.076096172s
	[INFO] 10.244.0.14:39097 - 51220 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000212081s
	[INFO] 10.244.0.14:39097 - 51486 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000116922s
	[INFO] 10.244.0.21:56821 - 269 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000227498s
	[INFO] 10.244.0.21:37016 - 45704 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000368911s
	[INFO] 10.244.0.21:39174 - 54428 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000128689s
	[INFO] 10.244.0.21:49914 - 11618 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000147447s
	[INFO] 10.244.0.21:53513 - 25023 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128588s
	[INFO] 10.244.0.21:47613 - 39552 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000199448s
	[INFO] 10.244.0.21:42230 - 63861 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.006237277s
	[INFO] 10.244.0.21:56310 - 71 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.006350806s
	[INFO] 10.244.0.21:46568 - 45342 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006355127s
	[INFO] 10.244.0.21:55492 - 2150 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007187347s
	[INFO] 10.244.0.21:40935 - 61048 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006022813s
	[INFO] 10.244.0.21:49790 - 37548 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006929634s
	[INFO] 10.244.0.21:41070 - 10081 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000994094s
	[INFO] 10.244.0.21:44496 - 9273 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00109501s
	[INFO] 10.244.0.28:51088 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000223463s
	[INFO] 10.244.0.28:51511 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000166504s
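
The NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search-list expansion: with `options ndots:5`, a name like registry.kube-system.svc.cluster.local (four dots) is first tried with each search suffix appended (cluster.local, then the GCE-provided *.internal domains visible above) before the bare name resolves NOERROR. The effective search list can be read from any pod; a sketch with a hypothetical pod name:

    kubectl exec some-pod -- cat /etc/resolv.conf
    # search <ns>.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
    # options ndots:5
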
	
	
	==> describe nodes <==
	Name:               addons-295301
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-295301
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=43cb59e6a4e9845c84b0379fb52045b7420d26a4
	                    minikube.k8s.io/name=addons-295301
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T11_01_19_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-295301
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 11:01:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-295301
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 11:05:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 11:03:51 +0000   Mon, 14 Apr 2025 11:01:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 11:03:51 +0000   Mon, 14 Apr 2025 11:01:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 11:03:51 +0000   Mon, 14 Apr 2025 11:01:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 11:03:51 +0000   Mon, 14 Apr 2025 11:01:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-295301
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f9369542c9d42c6bfbaf5e88afcb452
	  System UUID:                6292da80-b4c5-4a1e-a1f5-2fcb144f594f
	  Boot ID:                    bd6a00b0-3ee2-4efc-b732-99fd1a304e32
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  default                     hello-world-app-7d9564db4-f6hj4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-tdwx8    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m13s
	  kube-system                 coredns-668d6bf9bc-h5vxc                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m20s
	  kube-system                 etcd-addons-295301                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m25s
	  kube-system                 kindnet-ljtzk                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m20s
	  kube-system                 kube-apiserver-addons-295301                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-controller-manager-addons-295301        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-5mjsg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-scheduler-addons-295301                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  local-path-storage          local-path-provisioner-76f89f99b5-5w8m8      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m15s                  kube-proxy       
	  Normal   Starting                 4m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m30s (x8 over 4m31s)  kubelet          Node addons-295301 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m30s (x8 over 4m31s)  kubelet          Node addons-295301 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m30s (x8 over 4m31s)  kubelet          Node addons-295301 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m25s                  kubelet          Node addons-295301 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m25s                  kubelet          Node addons-295301 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m25s                  kubelet          Node addons-295301 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m21s                  node-controller  Node addons-295301 event: Registered Node addons-295301 in Controller
	  Normal   NodeReady                4m1s                   kubelet          Node addons-295301 status is now: NodeReady
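
	As a sanity check, the 950m CPU request in "Allocated resources" is just the column sum of the non-zero requests listed above: 100m (ingress-nginx-controller) + 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 950m, roughly 11% of the 8 allocatable CPUs. The memory totals add up the same way: 90Mi + 70Mi + 100Mi + 50Mi = 310Mi requested, and 170Mi + 50Mi = 220Mi in limits.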
	
	
	==> dmesg <==
	[  +0.000001] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 ea b8 d1 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev virbr0
	[  +0.000002] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 ea b8 d1 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev virbr0
	[  +0.000002] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 ea b8 d1 08 00
	[Apr14 10:26] IPv4: martian source 192.168.122.1 from 0.0.0.0, on dev virbr0
	[  +0.000011] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 2d 79 d0 08 00
	[  +0.000009] IPv4: martian source 192.168.122.1 from 0.0.0.0, on dev virbr0
	[  +0.000002] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 2d 79 d0 08 00
	[Apr14 10:39] IPv4: martian source 192.168.122.1 from 10.244.105.193, on dev virbr0
	[  +0.000010] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 48 83 d2 08 00
	[Apr14 11:03] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[  +1.006764] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[  +2.015785] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[  +4.191573] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[  +8.191181] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000033] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[ +16.126363] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[Apr14 11:04] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
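
	The "martian source" lines are the kernel flagging packets whose source address should never arrive on that interface (here 127.0.0.1 showing up on eth0); they are only printed when martian logging is enabled. Their Apr14 11:03-11:04 timestamps line up with the failed curl against 127.0.0.1 in this test, so they are plausibly a symptom of the same loopback/hairpin path rather than an independent fault. The logging knob is a standard Linux sysctl, checkable on the node with:
	
		sysctl net.ipv4.conf.all.log_martians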
	
	
	==> etcd [8bcd77c194be3f6b20cdb71a6b8d36fb159c1c57863958e0f85006db0250d358] <==
	{"level":"info","ts":"2025-04-14T11:01:26.801622Z","caller":"traceutil/trace.go:171","msg":"trace[1290783660] transaction","detail":"{read_only:false; response_revision:422; number_of_response:1; }","duration":"113.03127ms","start":"2025-04-14T11:01:26.688565Z","end":"2025-04-14T11:01:26.801596Z","steps":["trace[1290783660] 'process raft request'  (duration: 112.611569ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T11:01:27.099001Z","caller":"traceutil/trace.go:171","msg":"trace[864402957] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"107.991233ms","start":"2025-04-14T11:01:26.990993Z","end":"2025-04-14T11:01:27.098984Z","steps":["trace[864402957] 'process raft request'  (duration: 107.718049ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T11:01:27.099570Z","caller":"traceutil/trace.go:171","msg":"trace[1053009090] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"108.429849ms","start":"2025-04-14T11:01:26.991125Z","end":"2025-04-14T11:01:27.099555Z","steps":["trace[1053009090] 'process raft request'  (duration: 107.620561ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T11:01:27.100521Z","caller":"traceutil/trace.go:171","msg":"trace[1879291337] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"112.446317ms","start":"2025-04-14T11:01:26.988062Z","end":"2025-04-14T11:01:27.100508Z","steps":["trace[1879291337] 'process raft request'  (duration: 108.009311ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T11:01:27.182723Z","caller":"traceutil/trace.go:171","msg":"trace[1720358763] transaction","detail":"{read_only:false; response_revision:433; number_of_response:1; }","duration":"194.606075ms","start":"2025-04-14T11:01:26.988098Z","end":"2025-04-14T11:01:27.182704Z","steps":["trace[1720358763] 'process raft request'  (duration: 110.563629ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T11:01:27.183062Z","caller":"traceutil/trace.go:171","msg":"trace[1714312169] linearizableReadLoop","detail":"{readStateIndex:442; appliedIndex:440; }","duration":"192.246232ms","start":"2025-04-14T11:01:26.990805Z","end":"2025-04-14T11:01:27.183051Z","steps":["trace[1714312169] 'read index received'  (duration: 105.275571ms)","trace[1714312169] 'applied index is now lower than readState.Index'  (duration: 86.969759ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-14T11:01:27.186478Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.613047ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-295301\" limit:1 ","response":"range_response_count:1 size:5655"}
	{"level":"info","ts":"2025-04-14T11:01:27.281412Z","caller":"traceutil/trace.go:171","msg":"trace[710074077] range","detail":"{range_begin:/registry/minions/addons-295301; range_end:; response_count:1; response_revision:436; }","duration":"291.343265ms","start":"2025-04-14T11:01:26.989815Z","end":"2025-04-14T11:01:27.281158Z","steps":["trace[710074077] 'agreement among raft nodes before linearized reading'  (duration: 196.572528ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T11:01:27.702482Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.37564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" limit:1 ","response":"range_response_count:1 size:3351"}
	{"level":"info","ts":"2025-04-14T11:01:27.703665Z","caller":"traceutil/trace.go:171","msg":"trace[380333284] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:1; response_revision:461; }","duration":"107.566287ms","start":"2025-04-14T11:01:27.596067Z","end":"2025-04-14T11:01:27.703633Z","steps":["trace[380333284] 'agreement among raft nodes before linearized reading'  (duration: 106.354452ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T11:01:27.992035Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.788429ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T11:01:28.191817Z","caller":"traceutil/trace.go:171","msg":"trace[1702773581] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:464; }","duration":"300.585165ms","start":"2025-04-14T11:01:27.891207Z","end":"2025-04-14T11:01:28.191792Z","steps":["trace[1702773581] 'range keys from in-memory index tree'  (duration: 89.614223ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T11:01:28.191944Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-14T11:01:27.891201Z","time spent":"300.719385ms","remote":"127.0.0.1:40004","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-04-14T11:01:28.282879Z","caller":"traceutil/trace.go:171","msg":"trace[240422267] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"101.638632ms","start":"2025-04-14T11:01:28.181215Z","end":"2025-04-14T11:01:28.282853Z","steps":["trace[240422267] 'process raft request'  (duration: 15.253773ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T11:01:28.284972Z","caller":"traceutil/trace.go:171","msg":"trace[946642421] transaction","detail":"{read_only:false; response_revision:476; number_of_response:1; }","duration":"103.651609ms","start":"2025-04-14T11:01:28.181303Z","end":"2025-04-14T11:01:28.284954Z","steps":["trace[946642421] 'process raft request'  (duration: 15.211985ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T11:01:28.285770Z","caller":"traceutil/trace.go:171","msg":"trace[1510339463] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"104.370069ms","start":"2025-04-14T11:01:28.181384Z","end":"2025-04-14T11:01:28.285754Z","steps":["trace[1510339463] 'process raft request'  (duration: 15.166408ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-14T11:01:28.286234Z","caller":"traceutil/trace.go:171","msg":"trace[1853458469] transaction","detail":"{read_only:false; response_revision:478; number_of_response:1; }","duration":"104.720259ms","start":"2025-04-14T11:01:28.181493Z","end":"2025-04-14T11:01:28.286213Z","steps":["trace[1853458469] 'process raft request'  (duration: 15.090764ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T11:01:28.286329Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.18475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T11:01:28.286380Z","caller":"traceutil/trace.go:171","msg":"trace[1480741595] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:0; response_revision:483; }","duration":"105.273492ms","start":"2025-04-14T11:01:28.181087Z","end":"2025-04-14T11:01:28.286361Z","steps":["trace[1480741595] 'agreement among raft nodes before linearized reading'  (duration: 105.183668ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T11:01:29.001326Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.743493ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T11:01:29.001416Z","caller":"traceutil/trace.go:171","msg":"trace[1228575265] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:541; }","duration":"100.865835ms","start":"2025-04-14T11:01:28.900524Z","end":"2025-04-14T11:01:29.001390Z","steps":["trace[1228575265] 'agreement among raft nodes before linearized reading'  (duration: 92.678518ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T11:02:30.611870Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.353235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-14T11:02:30.611951Z","caller":"traceutil/trace.go:171","msg":"trace[1425659016] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1152; }","duration":"121.474522ms","start":"2025-04-14T11:02:30.490462Z","end":"2025-04-14T11:02:30.611936Z","steps":["trace[1425659016] 'range keys from in-memory index tree'  (duration: 121.286234ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-14T11:02:30.612211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.057267ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-04-14T11:02:30.612237Z","caller":"traceutil/trace.go:171","msg":"trace[1945526652] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1152; }","duration":"120.122042ms","start":"2025-04-14T11:02:30.492107Z","end":"2025-04-14T11:02:30.612229Z","steps":["trace[1945526652] 'range keys from in-memory index tree'  (duration: 119.94183ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:05:43 up 1 day, 19:48,  0 users,  load average: 0.40, 0.73, 0.96
	Linux addons-295301 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [b5a6e4741aeb06d2b697f1ef2d6befae79548d9e84ca78354f44bc3e7f1dae2d] <==
	I0414 11:03:41.882584       1 main.go:301] handling current node
	I0414 11:03:51.882314       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:03:51.882358       1 main.go:301] handling current node
	I0414 11:04:01.882621       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:04:01.882675       1 main.go:301] handling current node
	I0414 11:04:11.882264       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:04:11.882322       1 main.go:301] handling current node
	I0414 11:04:21.885106       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:04:21.885146       1 main.go:301] handling current node
	I0414 11:04:31.882714       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:04:31.882763       1 main.go:301] handling current node
	I0414 11:04:41.888496       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:04:41.888544       1 main.go:301] handling current node
	I0414 11:04:51.890616       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:04:51.890654       1 main.go:301] handling current node
	I0414 11:05:01.882843       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:05:01.882887       1 main.go:301] handling current node
	I0414 11:05:11.888494       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:05:11.888535       1 main.go:301] handling current node
	I0414 11:05:21.888550       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:05:21.888597       1 main.go:301] handling current node
	I0414 11:05:31.882033       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:05:31.882099       1 main.go:301] handling current node
	I0414 11:05:41.882180       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:05:41.882241       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ec8e8d3111a2bf70966a505727f218131e4701c7964f400d96401df956a9c903] <==
	I0414 11:02:24.882306       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0414 11:02:25.297201       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0414 11:02:52.804911       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42368: use of closed network connection
	E0414 11:02:52.983156       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42394: use of closed network connection
	I0414 11:03:02.092185       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.34.147"}
	I0414 11:03:21.528738       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0414 11:03:21.711078       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.199.238"}
	I0414 11:03:23.486115       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0414 11:03:24.513751       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0414 11:03:26.303355       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0414 11:03:46.217100       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0414 11:04:00.996454       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 11:04:00.996519       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 11:04:01.011368       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 11:04:01.011418       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 11:04:01.012616       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 11:04:01.012658       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 11:04:01.028896       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 11:04:01.028936       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0414 11:04:01.085021       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0414 11:04:01.085191       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0414 11:04:02.012745       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0414 11:04:02.084776       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0414 11:04:02.204438       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0414 11:05:41.934053       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.40.250"}
	
	
	==> kube-controller-manager [bf87b6b4987cf9ef9677ce1dd2eb67f475f36b7fb10e9300f3f147a6ccdb26a7] <==
	E0414 11:04:38.642881       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0414 11:04:38.643827       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 11:04:38.643875       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 11:05:12.385320       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 11:05:12.386478       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0414 11:05:12.387334       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 11:05:12.387368       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 11:05:23.862474       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 11:05:23.863600       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0414 11:05:23.864552       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 11:05:23.864604       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 11:05:24.909099       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 11:05:24.910095       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0414 11:05:24.911024       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 11:05:24.911070       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0414 11:05:25.122064       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0414 11:05:25.123171       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0414 11:05:25.124407       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0414 11:05:25.124455       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0414 11:05:41.727155       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="17.745634ms"
	I0414 11:05:41.731611       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="4.408075ms"
	I0414 11:05:41.731723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="53.621µs"
	I0414 11:05:41.736480       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="92.5µs"
	I0414 11:05:43.532878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="6.376794ms"
	I0414 11:05:43.532972       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="50.579µs"
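
	The repeated "the server could not find the requested resource" errors for the volumesnapshot* and traces.gadget.kinvolk.io resources line up with the API groups torn down in the kube-apiserver log above (gadget.kinvolk.io watchers terminated at 11:03:24, snapshot.storage.k8s.io at 11:04:02): the controller-manager's metadata informers keep retrying watches on CRDs that were removed when those addons were disabled, which is expected churn rather than a controller fault. One way to confirm the CRDs are really gone (illustrative, not part of the test harness):
	
		kubectl --context addons-295301 get crd | grep -E 'snapshot|gadget'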
	
	
	==> kube-proxy [8b5c5724c3634e9a18fd43da0ca6a9e5dab88eb741a4f883ce4bc17d0c84860c] <==
	I0414 11:01:25.002529       1 server_linux.go:66] "Using iptables proxy"
	I0414 11:01:27.289933       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0414 11:01:27.290883       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 11:01:28.383596       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0414 11:01:28.383761       1 server_linux.go:170] "Using iptables Proxier"
	I0414 11:01:28.388893       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 11:01:28.398631       1 server.go:497] "Version info" version="v1.32.2"
	I0414 11:01:28.402665       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 11:01:28.405246       1 config.go:199] "Starting service config controller"
	I0414 11:01:28.407479       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 11:01:28.406532       1 config.go:105] "Starting endpoint slice config controller"
	I0414 11:01:28.407510       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 11:01:28.407146       1 config.go:329] "Starting node config controller"
	I0414 11:01:28.407518       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 11:01:28.580759       1 shared_informer.go:320] Caches are synced for node config
	I0414 11:01:28.580805       1 shared_informer.go:320] Caches are synced for service config
	I0414 11:01:28.580819       1 shared_informer.go:320] Caches are synced for endpoint slice config
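
	Two kube-proxy details above bear on the ingress curl in this test: the proxier sets route_localnet=1 expressly to allow NodePorts on localhost, and the "nodePortAddresses is unset" warning means NodePort traffic is accepted on every local IP, loopback included. Nothing in this configuration would reject a request to 127.0.0.1; whether the failing curl actually traverses a NodePort depends on how the ingress addon exposes its controller, so treat this as context rather than a diagnosis. The sysctl the proxier reports setting can be verified on the node:
	
		sysctl net.ipv4.conf.all.route_localnet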
	
	
	==> kube-scheduler [4fcbedf9c30b21d553cc12d39570ab3a404d3eee6d4b8ec2465b384d81ebf27c] <==
	E0414 11:01:15.785339       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 11:01:15.784826       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0414 11:01:15.785374       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0414 11:01:15.784947       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0414 11:01:15.785401       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0414 11:01:15.784422       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 11:01:15.784958       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0414 11:01:15.785504       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0414 11:01:15.785697       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0414 11:01:15.785520       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 11:01:16.593998       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0414 11:01:16.594043       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0414 11:01:16.630965       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0414 11:01:16.631022       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 11:01:16.648699       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0414 11:01:16.648756       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 11:01:16.794906       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0414 11:01:16.794952       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0414 11:01:16.809672       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0414 11:01:16.809725       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0414 11:01:16.833607       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0414 11:01:16.833649       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0414 11:01:16.873193       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0414 11:01:16.873246       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0414 11:01:19.713900       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 11:05:18 addons-295301 kubelet[1667]: E0414 11:05:18.415459    1667 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/59d75c2a464520cfb491bbe050ac9665e8b8b7d67610962b85156f6a41902b0b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/59d75c2a464520cfb491bbe050ac9665e8b8b7d67610962b85156f6a41902b0b/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 11:05:18 addons-295301 kubelet[1667]: E0414 11:05:18.415500    1667 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7ed8372db4bffaf0d9f1f5f01bd5bdfcbfb1c1d02ce2286e94f05825d7b8381e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7ed8372db4bffaf0d9f1f5f01bd5bdfcbfb1c1d02ce2286e94f05825d7b8381e/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 11:05:18 addons-295301 kubelet[1667]: E0414 11:05:18.417697    1667 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/09d7b96a597b1d42dc253fc88708e2ce8f695c264e8c2588c4cc9406d3ff7886/diff" to get inode usage: stat /var/lib/containers/storage/overlay/09d7b96a597b1d42dc253fc88708e2ce8f695c264e8c2588c4cc9406d3ff7886/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 11:05:18 addons-295301 kubelet[1667]: E0414 11:05:18.417711    1667 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3ea22e21c4c469ff25d30249109eee2d7023b55bb1761ddd3d26abcb1d51b276/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3ea22e21c4c469ff25d30249109eee2d7023b55bb1761ddd3d26abcb1d51b276/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 11:05:18 addons-295301 kubelet[1667]: E0414 11:05:18.487679    1667 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3972e900aec58720dcec61528d42deb7e367ea79efccf7895b6482dd8769492e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3972e900aec58720dcec61528d42deb7e367ea79efccf7895b6482dd8769492e/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 11:05:18 addons-295301 kubelet[1667]: E0414 11:05:18.492019    1667 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/893180a5b1ce9dc5b11719bfe003b70c1955a335d3d9af844ea90a4f385a2608/diff" to get inode usage: stat /var/lib/containers/storage/overlay/893180a5b1ce9dc5b11719bfe003b70c1955a335d3d9af844ea90a4f385a2608/diff: no such file or directory, extraDiskErr: <nil>
	Apr 14 11:05:26 addons-295301 kubelet[1667]: I0414 11:05:26.292000    1667 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Apr 14 11:05:28 addons-295301 kubelet[1667]: E0414 11:05:28.342324    1667 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744628728341957888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:619167,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:05:28 addons-295301 kubelet[1667]: E0414 11:05:28.342381    1667 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744628728341957888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:619167,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:05:38 addons-295301 kubelet[1667]: E0414 11:05:38.346473    1667 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744628738346028816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:619167,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:05:38 addons-295301 kubelet[1667]: E0414 11:05:38.346523    1667 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744628738346028816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:619167,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:05:41 addons-295301 kubelet[1667]: I0414 11:05:41.725109    1667 memory_manager.go:355] "RemoveStaleState removing state" podUID="d5f4ccd7-fd32-4ee1-a6c4-1005cdaaf4c1" containerName="volume-snapshot-controller"
	Apr 14 11:05:41 addons-295301 kubelet[1667]: I0414 11:05:41.725167    1667 memory_manager.go:355] "RemoveStaleState removing state" podUID="3f7d9dfe-f6f8-4388-993d-2575755a4cac" containerName="csi-attacher"
	Apr 14 11:05:41 addons-295301 kubelet[1667]: I0414 11:05:41.725178    1667 memory_manager.go:355] "RemoveStaleState removing state" podUID="a601e1ee-f78e-4775-b1aa-d386a64d690f" containerName="node-driver-registrar"
	Apr 14 11:05:41 addons-295301 kubelet[1667]: I0414 11:05:41.725185    1667 memory_manager.go:355] "RemoveStaleState removing state" podUID="8fff5807-007c-4e12-8fe5-c8ea010a2d3d" containerName="task-pv-container"
	Apr 14 11:05:41 addons-295301 kubelet[1667]: I0414 11:05:41.725193    1667 memory_manager.go:355] "RemoveStaleState removing state" podUID="1b80db24-e549-4288-8ac3-276b7c64a8cd" containerName="csi-resizer"
	Apr 14 11:05:41 addons-295301 kubelet[1667]: I0414 11:05:41.725200    1667 memory_manager.go:355] "RemoveStaleState removing state" podUID="a601e1ee-f78e-4775-b1aa-d386a64d690f" containerName="csi-provisioner"
	Apr 14 11:05:41 addons-295301 kubelet[1667]: I0414 11:05:41.725211    1667 memory_manager.go:355] "RemoveStaleState removing state" podUID="a601e1ee-f78e-4775-b1aa-d386a64d690f" containerName="liveness-probe"
	Apr 14 11:05:41 addons-295301 kubelet[1667]: I0414 11:05:41.725217    1667 memory_manager.go:355] "RemoveStaleState removing state" podUID="a601e1ee-f78e-4775-b1aa-d386a64d690f" containerName="csi-external-health-monitor-controller"
	Apr 14 11:05:41 addons-295301 kubelet[1667]: I0414 11:05:41.725224    1667 memory_manager.go:355] "RemoveStaleState removing state" podUID="a601e1ee-f78e-4775-b1aa-d386a64d690f" containerName="csi-snapshotter"
	Apr 14 11:05:41 addons-295301 kubelet[1667]: I0414 11:05:41.725231    1667 memory_manager.go:355] "RemoveStaleState removing state" podUID="a601e1ee-f78e-4775-b1aa-d386a64d690f" containerName="hostpath"
	Apr 14 11:05:41 addons-295301 kubelet[1667]: I0414 11:05:41.725240    1667 memory_manager.go:355] "RemoveStaleState removing state" podUID="cbe0a4e2-7237-45b2-8dd0-cb26050eaf90" containerName="volume-snapshot-controller"
	Apr 14 11:05:41 addons-295301 kubelet[1667]: I0414 11:05:41.866496    1667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhj4z\" (UniqueName: \"kubernetes.io/projected/c2d383b5-f3d9-47fe-8078-79abc9029238-kube-api-access-qhj4z\") pod \"hello-world-app-7d9564db4-f6hj4\" (UID: \"c2d383b5-f3d9-47fe-8078-79abc9029238\") " pod="default/hello-world-app-7d9564db4-f6hj4"
	Apr 14 11:05:42 addons-295301 kubelet[1667]: W0414 11:05:42.101720    1667 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1eaf183df2538f1858ab5406034938b27d2f5b666e91c080c2b6e45da45c8cd9/crio-ec79210e6134275e3df462e86a6e9cd92ecb3a8314f22e4177e55d9042ba9ebb WatchSource:0}: Error finding container ec79210e6134275e3df462e86a6e9cd92ecb3a8314f22e4177e55d9042ba9ebb: Status 404 returned error can't find the container with id ec79210e6134275e3df462e86a6e9cd92ecb3a8314f22e4177e55d9042ba9ebb
	Apr 14 11:05:43 addons-295301 kubelet[1667]: I0414 11:05:43.526429    1667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-7d9564db4-f6hj4" podStartSLOduration=1.834813784 podStartE2EDuration="2.526399074s" podCreationTimestamp="2025-04-14 11:05:41 +0000 UTC" firstStartedPulling="2025-04-14 11:05:42.10449442 +0000 UTC m=+264.000794988" lastFinishedPulling="2025-04-14 11:05:42.796079717 +0000 UTC m=+264.692380278" observedRunningTime="2025-04-14 11:05:43.526400287 +0000 UTC m=+265.422700856" watchObservedRunningTime="2025-04-14 11:05:43.526399074 +0000 UTC m=+265.422699653"
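
	The fsHandler "no such file or directory" stat failures and the eviction manager's "missing image stats" errors are consistent with kubelet's cadvisor-based stats racing against cri-o removing overlay layers: the ImageFsInfoResponse in the message is present but lacks the container-filesystem entry kubelet expects, so the eviction manager skips that sync pass instead of evicting anything. The runtime's own view of the image filesystem can be compared from inside the node (crictl invocation shown as a sketch):
	
		sudo crictl imagefsinfo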
	
	
	==> storage-provisioner [fef079eeffb3f4b272d68d154c185f5338ee447879d7189d6a3231863212f915] <==
	I0414 11:01:43.334723       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0414 11:01:43.382920       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0414 11:01:43.382972       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0414 11:01:43.391296       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0414 11:01:43.391422       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4765a1df-21a8-43d6-9f5e-07095ead3364", APIVersion:"v1", ResourceVersion:"928", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-295301_f566007f-8a47-429f-a598-f395e2ed2af5 became leader
	I0414 11:01:43.391536       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-295301_f566007f-8a47-429f-a598-f395e2ed2af5!
	I0414 11:01:43.492163       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-295301_f566007f-8a47-429f-a598-f395e2ed2af5!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-295301 -n addons-295301
helpers_test.go:261: (dbg) Run:  kubectl --context addons-295301 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-ffhv7 ingress-nginx-admission-patch-sfpns
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-295301 describe pod ingress-nginx-admission-create-ffhv7 ingress-nginx-admission-patch-sfpns
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-295301 describe pod ingress-nginx-admission-create-ffhv7 ingress-nginx-admission-patch-sfpns: exit status 1 (64.650911ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ffhv7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-sfpns" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-295301 describe pod ingress-nginx-admission-create-ffhv7 ingress-nginx-admission-patch-sfpns: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-295301 addons disable ingress-dns --alsologtostderr -v=1: (1.211806673s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-295301 addons disable ingress --alsologtostderr -v=1: (7.726271568s)
--- FAIL: TestAddons/parallel/Ingress (152.03s)

TestFunctional/parallel/PersistentVolumeClaim (189.1s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ac3c0f87-7c97-450b-ba2d-47954aa7c000] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003760293s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-397992 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-397992 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-397992 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-397992 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [237f72df-ab98-4bb0-85de-8ad55e36802d] Pending
helpers_test.go:344: "sp-pod" [237f72df-ab98-4bb0-85de-8ad55e36802d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2025/04/14 11:09:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-397992 -n functional-397992
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-04-14 11:12:09.802243762 +0000 UTC m=+702.970631602
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-397992 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-397992 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-397992/192.168.49.2
Start Time:       Mon, 14 Apr 2025 11:09:09 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:  10.244.0.11
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xlhql (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-xlhql:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  3m                default-scheduler  Successfully assigned default/sp-pod to functional-397992
  Warning  Failed     86s               kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:b6653fca400812e81569f9be762ae315db685bc30b12ddcdc8616c63a227d3ca in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     86s               kubelet            Error: ErrImagePull
  Normal   BackOff    86s               kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     86s               kubelet            Error: ImagePullBackOff
  Normal   Pulling    74s (x2 over 3m)  kubelet            Pulling image "docker.io/nginx"
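Note: the events above pin the failure on Docker Hub's anonymous pull limit (toomanyrequests), not on the PVC plumbing under test. One way to sidestep it in an environment like this, assuming the host has credentials or a mirror configured, is to pull the image on the host and side-load it into the cluster node:

docker pull docker.io/nginx
out/minikube-linux-amd64 -p functional-397992 image load docker.io/nginx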
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-397992 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-397992 logs sp-pod -n default: exit status 1 (72.390893ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-397992 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
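Note: kubectl logs has nothing to show because the container never started; for pods stuck in image pull, the event stream usually carries the useful signal, e.g.:

kubectl --context functional-397992 get events -n default \
  --field-selector involvedObject.name=sp-pod --sort-by=.lastTimestamp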
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-397992
helpers_test.go:235: (dbg) docker inspect functional-397992:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc",
	        "Created": "2025-04-14T11:06:49.834352624Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1787666,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-14T11:06:49.868590846Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fa6441117abd3f0ec72d78de011fb44ecb7b1e274ddcf240e39454ed1f98f388",
	        "ResolvConfPath": "/var/lib/docker/containers/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc/hosts",
	        "LogPath": "/var/lib/docker/containers/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc-json.log",
	        "Name": "/functional-397992",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-397992:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-397992",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc",
	                "LowerDir": "/var/lib/docker/overlay2/301dc51297eff587e2b311e5e517ecd1ab37f32f27af195ae887c74143b9c8ca-init/diff:/var/lib/docker/overlay2/c6d8bf10401ece8b3f73261aeb3a606dd205e8233950c57e244d9cccf977865e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/301dc51297eff587e2b311e5e517ecd1ab37f32f27af195ae887c74143b9c8ca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/301dc51297eff587e2b311e5e517ecd1ab37f32f27af195ae887c74143b9c8ca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/301dc51297eff587e2b311e5e517ecd1ab37f32f27af195ae887c74143b9c8ca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-397992",
	                "Source": "/var/lib/docker/volumes/functional-397992/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-397992",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-397992",
	                "name.minikube.sigs.k8s.io": "functional-397992",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d4323061bcecf728495c82fc5d6db56af458588f31f05466712855bbc0cad60",
	            "SandboxKey": "/var/run/docker/netns/1d4323061bce",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-397992": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:c1:4c:6d:72:2d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bce73168dd39f3271483b4ecc01f803685c46614a56b6cd43f5aab2ca136d260",
	                    "EndpointID": "102208a251aa043f6d7b5643cd84d621aaa86cc175eb6ae21084dd35876ac03d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-397992",
	                        "ee9515e3608b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
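Note: the full inspect dump is verbose; single fields can be pulled out with Go templates instead. As a sanity check against the profile config (Memory:4000 CPUs:2), HostConfig.Memory above is 4194304000 bytes = 4000 * 1024 * 1024, and NanoCpus is 2000000000 = 2 CPUs. For example:

docker inspect -f '{{.State.Status}} {{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' functional-397992
docker inspect -f '{{(index .NetworkSettings.Networks "functional-397992").IPAddress}}' functional-397992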
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-397992 -n functional-397992
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-397992 logs -n 25: (1.468252011s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                    Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-397992 ssh cat                                                  | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | /etc/hostname                                                              |                   |         |         |                     |                     |
	| image          | functional-397992 image ls                                                 | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	| ssh            | functional-397992 ssh -n                                                   | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | functional-397992 sudo cat                                                 |                   |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                                   |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                         | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | -p functional-397992                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                     |                   |         |         |                     |                     |
	| cp             | functional-397992 cp                                                       | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | functional-397992:/home/docker/cp-test.txt                                 |                   |         |         |                     |                     |
	|                | /tmp/TestFunctionalparallelCpCmd954204075/001/cp-test.txt                  |                   |         |         |                     |                     |
	| image          | functional-397992 image save kicbase/echo-server:functional-397992         | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-397992 ssh -n                                                   | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | functional-397992 sudo cat                                                 |                   |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                                   |                   |         |         |                     |                     |
	| image          | functional-397992 image rm                                                 | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | kicbase/echo-server:functional-397992                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| cp             | functional-397992 cp                                                       | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | testdata/cp-test.txt                                                       |                   |         |         |                     |                     |
	|                | /tmp/does/not/exist/cp-test.txt                                            |                   |         |         |                     |                     |
	| image          | functional-397992 image ls                                                 | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	| ssh            | functional-397992 ssh -n                                                   | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | functional-397992 sudo cat                                                 |                   |         |         |                     |                     |
	|                | /tmp/does/not/exist/cp-test.txt                                            |                   |         |         |                     |                     |
	| image          | functional-397992 image load                                               | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| tunnel         | functional-397992 tunnel                                                   | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| tunnel         | functional-397992 tunnel                                                   | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| tunnel         | functional-397992 tunnel                                                   | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| update-context | functional-397992                                                          | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-397992                                                          | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-397992                                                          | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| image          | functional-397992                                                          | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | image ls --format short                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-397992                                                          | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | image ls --format yaml                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-397992 ssh pgrep                                                | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC |                     |
	|                | buildkitd                                                                  |                   |         |         |                     |                     |
	| image          | functional-397992 image build -t                                           | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | localhost/my-image:functional-397992                                       |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                           |                   |         |         |                     |                     |
	| image          | functional-397992 image ls                                                 | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	| image          | functional-397992                                                          | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | image ls --format json                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-397992                                                          | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | image ls --format table                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
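Note: the cp/ssh rows above follow a copy-then-verify pattern; reproduced by hand it would look roughly like this (paths taken from the table, the exact harness invocation may differ):

out/minikube-linux-amd64 -p functional-397992 cp testdata/cp-test.txt functional-397992:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p functional-397992 ssh -n functional-397992 "sudo cat /home/docker/cp-test.txt"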
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 11:09:00
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 11:09:00.864609 1802187 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:09:00.864747 1802187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:09:00.864756 1802187 out.go:358] Setting ErrFile to fd 2...
	I0414 11:09:00.864760 1802187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:09:00.864994 1802187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
	I0414 11:09:00.865529 1802187 out.go:352] Setting JSON to false
	I0414 11:09:00.866705 1802187 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":157889,"bootTime":1744471052,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 11:09:00.866793 1802187 start.go:139] virtualization: kvm guest
	I0414 11:09:00.868846 1802187 out.go:177] * [functional-397992] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 11:09:00.871217 1802187 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 11:09:00.871237 1802187 notify.go:220] Checking for updates...
	I0414 11:09:00.874628 1802187 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 11:09:00.876643 1802187 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-1756784/kubeconfig
	I0414 11:09:00.878306 1802187 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-1756784/.minikube
	I0414 11:09:00.879752 1802187 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 11:09:00.881286 1802187 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 11:09:00.883212 1802187 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:09:00.883760 1802187 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 11:09:00.913295 1802187 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0414 11:09:00.913379 1802187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 11:09:00.979616 1802187 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-14 11:09:00.967892855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0414 11:09:00.979772 1802187 docker.go:318] overlay module found
	I0414 11:09:00.981938 1802187 out.go:177] * Using the docker driver based on existing profile
	I0414 11:09:00.983292 1802187 start.go:297] selected driver: docker
	I0414 11:09:00.983307 1802187 start.go:901] validating driver "docker" against &{Name:functional-397992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-397992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:09:00.983405 1802187 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 11:09:00.983538 1802187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 11:09:01.047007 1802187 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-14 11:09:01.03762816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0414 11:09:01.047687 1802187 cni.go:84] Creating CNI manager for ""
	I0414 11:09:01.047763 1802187 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0414 11:09:01.047822 1802187 start.go:340] cluster config:
	{Name:functional-397992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-397992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:09:01.050428 1802187 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Apr 14 11:09:28 functional-397992 crio[5503]: time="2025-04-14 11:09:28.978905208Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 14 11:09:28 functional-397992 crio[5503]: time="2025-04-14 11:09:28.995710342Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4588e5b95c2fbd7fb1340bebb0382b427b92794523a561975122d78f5f49f29b/merged/etc/group: no such file or directory"
	Apr 14 11:09:29 functional-397992 crio[5503]: time="2025-04-14 11:09:29.035688451Z" level=info msg="Created container b28f7aaf9f0064cfd70c95a018c1e2eb3c85a38d9b714659de9ac6bd41d4ac8c: kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-wmhqq/dashboard-metrics-scraper" id=71556533-724e-4869-883f-5f81c6351c90 name=/runtime.v1.RuntimeService/CreateContainer
	Apr 14 11:09:29 functional-397992 crio[5503]: time="2025-04-14 11:09:29.036115967Z" level=info msg="Starting container: b28f7aaf9f0064cfd70c95a018c1e2eb3c85a38d9b714659de9ac6bd41d4ac8c" id=a064bf46-3735-483e-8af6-6acf658a16da name=/runtime.v1.RuntimeService/StartContainer
	Apr 14 11:09:29 functional-397992 crio[5503]: time="2025-04-14 11:09:29.043671547Z" level=info msg="Started container" PID=8604 containerID=b28f7aaf9f0064cfd70c95a018c1e2eb3c85a38d9b714659de9ac6bd41d4ac8c description=kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b-wmhqq/dashboard-metrics-scraper id=a064bf46-3735-483e-8af6-6acf658a16da name=/runtime.v1.RuntimeService/StartContainer sandboxID=887a9e2965f3cbaafc8ae72c5a42b650e8f9cee8079e0d6a5b955ab5853035e6
	Apr 14 11:09:39 functional-397992 crio[5503]: time="2025-04-14 11:09:39.895288384Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=262d39b4-5d02-4aea-a5e1-3c121ca92027 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:09:39 functional-397992 crio[5503]: time="2025-04-14 11:09:39.895585611Z" level=info msg="Image docker.io/mysql:5.7 not found" id=262d39b4-5d02-4aea-a5e1-3c121ca92027 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:09:59 functional-397992 crio[5503]: time="2025-04-14 11:09:59.077550297Z" level=info msg="Pulling image: docker.io/nginx:latest" id=c195bc71-4459-4286-9dd0-9b8c5905e550 name=/runtime.v1.ImageService/PullImage
	Apr 14 11:09:59 functional-397992 crio[5503]: time="2025-04-14 11:09:59.082917811Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Apr 14 11:09:59 functional-397992 crio[5503]: time="2025-04-14 11:09:59.256800638Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=d65a1677-38d1-4bb1-b65d-230c7f4b9f50 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:09:59 functional-397992 crio[5503]: time="2025-04-14 11:09:59.257073538Z" level=info msg="Image docker.io/nginx:alpine not found" id=d65a1677-38d1-4bb1-b65d-230c7f4b9f50 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:10:11 functional-397992 crio[5503]: time="2025-04-14 11:10:11.895049958Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=3d40a832-4e60-4a72-8a24-557f91f26df2 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:10:11 functional-397992 crio[5503]: time="2025-04-14 11:10:11.895293024Z" level=info msg="Image docker.io/nginx:alpine not found" id=3d40a832-4e60-4a72-8a24-557f91f26df2 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:10:43 functional-397992 crio[5503]: time="2025-04-14 11:10:43.299360671Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=32d9b1a7-793d-47ec-bd2a-14f398013143 name=/runtime.v1.ImageService/PullImage
	Apr 14 11:10:43 functional-397992 crio[5503]: time="2025-04-14 11:10:43.305472684Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Apr 14 11:11:13 functional-397992 crio[5503]: time="2025-04-14 11:11:13.420957328Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=810ec4ca-e616-430e-84fb-21bfcd15a99f name=/runtime.v1.ImageService/PullImage
	Apr 14 11:11:13 functional-397992 crio[5503]: time="2025-04-14 11:11:13.427524767Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Apr 14 11:11:25 functional-397992 crio[5503]: time="2025-04-14 11:11:25.895344592Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=5c720a66-ba4d-443a-83a3-c6d477c5ee4a name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:11:25 functional-397992 crio[5503]: time="2025-04-14 11:11:25.895595876Z" level=info msg="Image docker.io/mysql:5.7 not found" id=5c720a66-ba4d-443a-83a3-c6d477c5ee4a name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:11:38 functional-397992 crio[5503]: time="2025-04-14 11:11:38.895157520Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=e8ab327a-f693-44f1-8abf-5690e68096f5 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:11:38 functional-397992 crio[5503]: time="2025-04-14 11:11:38.895400905Z" level=info msg="Image docker.io/mysql:5.7 not found" id=e8ab327a-f693-44f1-8abf-5690e68096f5 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:11:43 functional-397992 crio[5503]: time="2025-04-14 11:11:43.536202000Z" level=info msg="Pulling image: docker.io/nginx:latest" id=ddc951b0-5f21-4b65-ad0e-65ee92058c63 name=/runtime.v1.ImageService/PullImage
	Apr 14 11:11:43 functional-397992 crio[5503]: time="2025-04-14 11:11:43.541002924Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Apr 14 11:11:55 functional-397992 crio[5503]: time="2025-04-14 11:11:55.895179641Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=da288e9a-683c-42c1-8f50-75046b0b380f name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:11:55 functional-397992 crio[5503]: time="2025-04-14 11:11:55.895463093Z" level=info msg="Image docker.io/nginx:alpine not found" id=da288e9a-683c-42c1-8f50-75046b0b380f name=/runtime.v1.ImageService/ImageStatus
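	Note: the repeated "Image ... not found" entries above are the kubelet probing CRI-O for images that never arrived. The same state can be queried by hand over the CRI socket from inside the node, e.g.:
	
	out/minikube-linux-amd64 -p functional-397992 ssh "sudo crictl images"
	out/minikube-linux-amd64 -p functional-397992 ssh "sudo crictl pull docker.io/library/nginx:latest"
	
	(The manual pull would hit the same docker.io rate limit seen in the pod events.)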
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	b28f7aaf9f006       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   2 minutes ago       Running             dashboard-metrics-scraper   0                   887a9e2965f3c       dashboard-metrics-scraper-5d59dccf9b-wmhqq
	d2509f8cecf0c       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         2 minutes ago       Running             kubernetes-dashboard        0                   743873e9e229d       kubernetes-dashboard-7779f9b69b-kmhxl
	f9b84f5e2f8ad       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              3 minutes ago       Exited              mount-munger                0                   b519ae0be3f04       busybox-mount
	15ad8b35b6c71       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago       Running             echoserver                  0                   26aff244b7a29       hello-node-fcfd88b6f-bhzs7
	510f0bcb613db       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago       Running             echoserver                  0                   2a30d0e7ed5b8       hello-node-connect-58f9cf68d8-b8vph
	c2c08afa4f2c6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago       Running             coredns                     3                   a6ec53a54873c       coredns-668d6bf9bc-mcc6f
	786146c75553e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago       Running             storage-provisioner         4                   de9adb7129e83       storage-provisioner
	6a5a1af4fde4e       df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f                                                 3 minutes ago       Running             kindnet-cni                 3                   c60490d3217ba       kindnet-kqbzh
	3f6463700e2d2       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                 3 minutes ago       Running             kube-proxy                  3                   1ab5b78f6af4d       kube-proxy-fv6dh
	912dcca79261c       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                 3 minutes ago       Running             kube-apiserver              0                   cfb32d268c47a       kube-apiserver-functional-397992
	fc6452ab618e5       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 3 minutes ago       Running             etcd                        3                   0a2c0aabda440       etcd-functional-397992
	25110bcc33d0a       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                 3 minutes ago       Running             kube-controller-manager     3                   e931f99ffc530       kube-controller-manager-functional-397992
	874281c757752       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                 3 minutes ago       Running             kube-scheduler              3                   573a02ae8a80f       kube-scheduler-functional-397992
	fbaeb710ff28b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago       Exited              storage-provisioner         3                   de9adb7129e83       storage-provisioner
	e58d82dc5b3ed       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 4 minutes ago       Exited              coredns                     2                   a6ec53a54873c       coredns-668d6bf9bc-mcc6f
	a2806b2334f56       df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f                                                 4 minutes ago       Exited              kindnet-cni                 2                   c60490d3217ba       kindnet-kqbzh
	bf3202a776e06       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                 4 minutes ago       Exited              kube-proxy                  2                   1ab5b78f6af4d       kube-proxy-fv6dh
	572405034f047       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                 4 minutes ago       Exited              kube-controller-manager     2                   e931f99ffc530       kube-controller-manager-functional-397992
	1c75dd3a3a07d       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                 4 minutes ago       Exited              kube-scheduler              2                   573a02ae8a80f       kube-scheduler-functional-397992
	602d548a5e25a       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 4 minutes ago       Exited              etcd                        2                   0a2c0aabda440       etcd-functional-397992
	
	
	==> coredns [c2c08afa4f2c6302f97cc265d7cc98e9b1e9b7bcc099660121e76f6182341812] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58567 - 17606 "HINFO IN 4259550440992458422.831302973880644257. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01683688s
	
	
	==> coredns [e58d82dc5b3edfadeafe1423c76d56ee6f05de6d543988d6019c93f0834c244a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:59460 - 57570 "HINFO IN 3819271620822108180.3662015948633467775. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019372607s
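	Note: the HINFO NXDOMAIN line is CoreDNS's own startup self-check, not a failure. A quick in-cluster resolution check would be something like the following, though in this run the busybox image also comes from docker.io and could hit the same pull limit:
	
	kubectl --context functional-397992 run dns-probe --rm -it --restart=Never \
	  --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local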
	
	
	==> describe nodes <==
	Name:               functional-397992
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-397992
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=43cb59e6a4e9845c84b0379fb52045b7420d26a4
	                    minikube.k8s.io/name=functional-397992
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T11_07_04_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 11:07:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-397992
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 11:12:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 11:09:54 +0000   Mon, 14 Apr 2025 11:06:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 11:09:54 +0000   Mon, 14 Apr 2025 11:06:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 11:09:54 +0000   Mon, 14 Apr 2025 11:06:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 11:09:54 +0000   Mon, 14 Apr 2025 11:07:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-397992
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	System Info:
	  Machine ID:                 5060630ae9cf455f97fef0a5b6278d48
	  System UUID:                af47fcb5-b8d0-4c0e-b2e2-edf0948935f7
	  Boot ID:                    bd6a00b0-3ee2-4efc-b732-99fd1a304e32
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-b8vph           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  default                     hello-node-fcfd88b6f-bhzs7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  default                     mysql-58ccfd96bb-t7srs                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     3m21s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-668d6bf9bc-mcc6f                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m3s
	  kube-system                 etcd-functional-397992                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m8s
	  kube-system                 kindnet-kqbzh                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m3s
	  kube-system                 kube-apiserver-functional-397992              250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-functional-397992     200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-proxy-fv6dh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-scheduler-functional-397992              100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-wmhqq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-kmhxl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m1s                   kube-proxy       
	  Normal   Starting                 3m47s                  kube-proxy       
	  Normal   Starting                 4m15s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  5m14s (x8 over 5m14s)  kubelet          Node functional-397992 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 5m14s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 5m14s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5m14s (x8 over 5m14s)  kubelet          Node functional-397992 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m14s (x8 over 5m14s)  kubelet          Node functional-397992 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     5m8s                   kubelet          Node functional-397992 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 5m8s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m8s                   kubelet          Node functional-397992 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m8s                   kubelet          Node functional-397992 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5m8s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m4s                   node-controller  Node functional-397992 event: Registered Node functional-397992 in Controller
	  Normal   NodeReady                4m50s                  kubelet          Node functional-397992 status is now: NodeReady
	  Normal   RegisteredNode           4m21s                  node-controller  Node functional-397992 event: Registered Node functional-397992 in Controller
	  Normal   Starting                 3m53s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m53s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m52s (x8 over 3m52s)  kubelet          Node functional-397992 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m52s (x8 over 3m52s)  kubelet          Node functional-397992 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m52s (x8 over 3m52s)  kubelet          Node functional-397992 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m46s                  node-controller  Node functional-397992 event: Registered Node functional-397992 in Controller
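
	As a sanity check on the "Allocated resources" figures above: the percentages are computed against the node's allocatable capacity, so 1450m of CPU requests against 8000m allocatable is roughly 18%, and 732Mi of memory requests against 32859372Ki (about 31.3Gi) is roughly 2%. The whole section can be regenerated at any time with:
	$ kubectl --context functional-397992 describe node functional-397992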
	
	
	==> dmesg <==
	[  +0.000002] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 2d 79 d0 08 00
	[Apr14 10:39] IPv4: martian source 192.168.122.1 from 10.244.105.193, on dev virbr0
	[  +0.000010] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 48 83 d2 08 00
	[Apr14 11:03] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[  +1.006764] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[  +2.015785] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[  +4.191573] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[  +8.191181] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000033] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[ +16.126363] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[Apr14 11:04] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[Apr14 11:08] FS-Cache: Duplicate cookie detected
	[  +0.004795] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006892] FS-Cache: O-cookie d=000000003b4e49f9{9P.session} n=00000000ba1d09d0
	[  +0.007635] FS-Cache: O-key=[10] '34333334333633383138'
	[  +0.005464] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006729] FS-Cache: N-cookie d=000000003b4e49f9{9P.session} n=000000008e969b07
	[  +0.009036] FS-Cache: N-key=[10] '34333334333633383138'
	[Apr14 11:09] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
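
	The bursts of "martian source ... from 127.0.0.1" are a known side effect of kube-proxy setting route_localnet=1 (visible in the kube-proxy logs below), which lets loopback-sourced NodePort traffic leave the lo interface and trips the kernel's martian-packet logging. A hedged way to confirm the sysctl on the node:
	$ minikube -p functional-397992 ssh -- sysctl net.ipv4.conf.all.route_localnet   # expect 1 when kube-proxy allows localhost NodePorts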
	
	
	==> etcd [602d548a5e25ab9c928894cfaa03d2a1b620a7c613072309e942b380a6d3dc73] <==
	{"level":"info","ts":"2025-04-14T11:07:46.013920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-04-14T11:07:46.013937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-04-14T11:07:46.013952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2025-04-14T11:07:46.013960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-04-14T11:07:46.013969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2025-04-14T11:07:46.013986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-04-14T11:07:46.015130Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-397992 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T11:07:46.015145Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T11:07:46.015174Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T11:07:46.015431Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T11:07:46.015468Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T11:07:46.015967Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T11:07:46.016169Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T11:07:46.016903Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T11:07:46.016901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-04-14T11:08:05.915560Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-04-14T11:08:05.915643Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-397992","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2025-04-14T11:08:05.915737Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-14T11:08:05.915845Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-14T11:08:05.925663Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-14T11:08:05.925723Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-04-14T11:08:05.925776Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-04-14T11:08:05.929876Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-04-14T11:08:05.929982Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-04-14T11:08:05.929991Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-397992","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [fc6452ab618e5cb40d933e6dd2f4912382c85468c60ab52387f2dd87ac2e931a] <==
	{"level":"info","ts":"2025-04-14T11:08:19.903083Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-14T11:08:19.905906Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2025-04-14T11:08:19.906082Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T11:08:19.906126Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T11:08:19.907437Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T11:08:19.907947Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T11:08:19.907591Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-04-14T11:08:19.908525Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-04-14T11:08:19.908135Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-14T11:08:21.695033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-04-14T11:08:21.695086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-04-14T11:08:21.695104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-04-14T11:08:21.695116Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 5"}
	{"level":"info","ts":"2025-04-14T11:08:21.695142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-04-14T11:08:21.695152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 5"}
	{"level":"info","ts":"2025-04-14T11:08:21.695159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-04-14T11:08:21.698907Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-397992 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T11:08:21.698939Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T11:08:21.698910Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T11:08:21.699170Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T11:08:21.699217Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T11:08:21.699751Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T11:08:21.699926Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T11:08:21.701094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-04-14T11:08:21.701305Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
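
	Both etcd blocks describe the same single-member cluster (member aec36adc501070cc) across two restarts: the first instance campaigns from term 3 and wins at term 4, then shuts down cleanly on SIGTERM (leadership transfer is skipped because there is only one voting member), and the second instance campaigns from term 4 and wins at term 5. Using the cert paths the server itself logs, a health probe could look like this sketch:
	$ kubectl --context functional-397992 -n kube-system exec etcd-functional-397992 -- \
	    etcdctl --endpoints=https://127.0.0.1:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key \
	      endpoint health   # cert paths taken from the etcd startup log above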
	
	
	==> kernel <==
	 11:12:11 up 1 day, 19:54,  0 users,  load average: 0.19, 0.74, 0.93
	Linux functional-397992 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6a5a1af4fde4e3a6ac49c6d166c7e83d8bac07206570a6df219b995eeffba955] <==
	I0414 11:10:03.982440       1 main.go:301] handling current node
	I0414 11:10:13.982762       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:10:13.982814       1 main.go:301] handling current node
	I0414 11:10:23.982309       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:10:23.982347       1 main.go:301] handling current node
	I0414 11:10:33.988562       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:10:33.988621       1 main.go:301] handling current node
	I0414 11:10:43.982806       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:10:43.982841       1 main.go:301] handling current node
	I0414 11:10:53.990896       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:10:53.990939       1 main.go:301] handling current node
	I0414 11:11:03.982512       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:11:03.982651       1 main.go:301] handling current node
	I0414 11:11:13.982705       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:11:13.982740       1 main.go:301] handling current node
	I0414 11:11:23.982055       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:11:23.982094       1 main.go:301] handling current node
	I0414 11:11:33.985714       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:11:33.985752       1 main.go:301] handling current node
	I0414 11:11:43.985736       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:11:43.985787       1 main.go:301] handling current node
	I0414 11:11:53.990951       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:11:53.990990       1 main.go:301] handling current node
	I0414 11:12:03.988773       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:12:03.988815       1 main.go:301] handling current node
	
	
	==> kindnet [a2806b2334f56cd16f941bfe45c04d1c5472a9772cd500e86900fc407fbdf70d] <==
	I0414 11:07:58.383871       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0414 11:07:58.384143       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0414 11:07:58.384295       1 main.go:148] setting mtu 1500 for CNI 
	I0414 11:07:58.384316       1 main.go:178] kindnetd IP family: "ipv4"
	I0414 11:07:58.384326       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0414 11:07:58.781571       1 controller.go:361] Starting controller kube-network-policies
	I0414 11:07:58.781739       1 controller.go:365] Waiting for informer caches to sync
	I0414 11:07:58.781904       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0414 11:07:59.082130       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0414 11:07:59.082175       1 metrics.go:61] Registering metrics
	I0414 11:07:59.082249       1 controller.go:401] Syncing nftables rules
	
	
	==> kube-apiserver [912dcca79261c8be88ad1d62cf21a9266e5708783b6ebf67a7d99f682ec36b36] <==
	I0414 11:08:22.823456       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0414 11:08:22.824030       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0414 11:08:22.880547       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0414 11:08:22.880606       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0414 11:08:22.880630       1 shared_informer.go:320] Caches are synced for configmaps
	I0414 11:08:22.882173       1 cache.go:39] Caches are synced for autoregister controller
	I0414 11:08:22.885314       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0414 11:08:22.890225       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0414 11:08:23.004376       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 11:08:23.640124       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 11:08:24.341984       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 11:08:24.435852       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 11:08:24.489635       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 11:08:24.495941       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 11:08:26.012491       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0414 11:08:26.262407       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 11:08:26.362556       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 11:08:44.549689       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.253.99"}
	I0414 11:08:48.621661       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.253.128"}
	I0414 11:08:48.802857       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.64.165"}
	I0414 11:08:50.590217       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.194.16"}
	I0414 11:09:02.794794       1 controller.go:615] quota admission added evaluator for: namespaces
	I0414 11:09:03.089066       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.208.31"}
	I0414 11:09:03.115802       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.20.32"}
	I0414 11:09:03.819430       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.30.131"}
	
	
	==> kube-controller-manager [25110bcc33d0a83994b71ce941dd8646b65c8d632cf330ec6ff317f23c10173e] <==
	I0414 11:09:02.909432       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="19.778144ms"
	E0414 11:09:02.909556       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0414 11:09:02.914793       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="7.146954ms"
	E0414 11:09:02.914935       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0414 11:09:02.916457       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="5.794087ms"
	E0414 11:09:02.916543       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0414 11:09:02.922919       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="5.217285ms"
	E0414 11:09:02.922953       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0414 11:09:02.990588       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="62.367412ms"
	I0414 11:09:02.996936       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="15.741292ms"
	I0414 11:09:03.001304       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="10.653622ms"
	I0414 11:09:03.001551       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="52.847µs"
	I0414 11:09:03.004774       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="7.779713ms"
	I0414 11:09:03.004866       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="54.352µs"
	I0414 11:09:03.081148       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="87.358µs"
	I0414 11:09:24.196366       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-397992"
	I0414 11:09:25.186790       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="76.917µs"
	I0414 11:09:28.202436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="6.62128ms"
	I0414 11:09:28.202552       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="66.559µs"
	I0414 11:09:29.207869       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="6.515616ms"
	I0414 11:09:29.207999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="87.734µs"
	I0414 11:09:39.905061       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="91.662µs"
	I0414 11:09:54.591466       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-397992"
	I0414 11:11:25.904930       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="94.03µs"
	I0414 11:11:38.904834       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="74.063µs"
	
	
	==> kube-controller-manager [572405034f047e9d4d9120c834fa2df1d2d53665d6984cffb7176e16882eff84] <==
	I0414 11:07:50.339638       1 shared_informer.go:320] Caches are synced for endpoint
	I0414 11:07:50.339676       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0414 11:07:50.339787       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0414 11:07:50.342725       1 shared_informer.go:320] Caches are synced for resource quota
	I0414 11:07:50.343603       1 shared_informer.go:320] Caches are synced for node
	I0414 11:07:50.343656       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0414 11:07:50.343693       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0414 11:07:50.343704       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0414 11:07:50.343711       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0414 11:07:50.343780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-397992"
	I0414 11:07:50.346099       1 shared_informer.go:320] Caches are synced for persistent volume
	I0414 11:07:50.355437       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 11:07:50.361655       1 shared_informer.go:320] Caches are synced for resource quota
	I0414 11:07:50.367971       1 shared_informer.go:320] Caches are synced for taint
	I0414 11:07:50.368102       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0414 11:07:50.368200       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-397992"
	I0414 11:07:50.368242       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0414 11:07:50.372440       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0414 11:07:50.389493       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 11:07:50.389523       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0414 11:07:50.389537       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0414 11:07:50.598709       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="282.994498ms"
	I0414 11:07:50.598817       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="61.288µs"
	I0414 11:07:54.288011       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-397992"
	I0414 11:08:04.451439       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-397992"
	
	
	==> kube-proxy [3f6463700e2d2610638c2a3c7b8f4361f0ec6355ad9dd9035f9edb8a99977bb0] <==
	I0414 11:08:23.411613       1 server_linux.go:66] "Using iptables proxy"
	I0414 11:08:23.533227       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0414 11:08:23.533292       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 11:08:23.553588       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0414 11:08:23.553656       1 server_linux.go:170] "Using iptables Proxier"
	I0414 11:08:23.555711       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 11:08:23.556014       1 server.go:497] "Version info" version="v1.32.2"
	I0414 11:08:23.556059       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 11:08:23.557758       1 config.go:199] "Starting service config controller"
	I0414 11:08:23.557772       1 config.go:105] "Starting endpoint slice config controller"
	I0414 11:08:23.557817       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 11:08:23.557816       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 11:08:23.557864       1 config.go:329] "Starting node config controller"
	I0414 11:08:23.557873       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 11:08:23.658275       1 shared_informer.go:320] Caches are synced for node config
	I0414 11:08:23.658304       1 shared_informer.go:320] Caches are synced for service config
	I0414 11:08:23.658315       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [bf3202a776e066d2517485461c06e2439e57972ce8a48ac524bfbaa157e497da] <==
	I0414 11:07:55.306590       1 server_linux.go:66] "Using iptables proxy"
	I0414 11:07:55.413878       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0414 11:07:55.413960       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 11:07:55.438626       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0414 11:07:55.438699       1 server_linux.go:170] "Using iptables Proxier"
	I0414 11:07:55.440789       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 11:07:55.441235       1 server.go:497] "Version info" version="v1.32.2"
	I0414 11:07:55.441280       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 11:07:55.442574       1 config.go:105] "Starting endpoint slice config controller"
	I0414 11:07:55.442625       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 11:07:55.442631       1 config.go:199] "Starting service config controller"
	I0414 11:07:55.442663       1 config.go:329] "Starting node config controller"
	I0414 11:07:55.442661       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 11:07:55.442682       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 11:07:55.543129       1 shared_informer.go:320] Caches are synced for service config
	I0414 11:07:55.543257       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 11:07:55.543286       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1c75dd3a3a07d701062c6aef9ecbbe7f7deb8b5aaa95718eced9854a52571030] <==
	I0414 11:07:45.238502       1 serving.go:386] Generated self-signed cert in-memory
	W0414 11:07:47.086107       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 11:07:47.086220       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 11:07:47.086260       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 11:07:47.086295       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 11:07:47.191133       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0414 11:07:47.191181       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 11:07:47.193985       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 11:07:47.194041       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 11:07:47.194217       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0414 11:07:47.194316       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0414 11:07:47.294546       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 11:08:05.916472       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0414 11:08:05.916559       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0414 11:08:05.916655       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0414 11:08:05.916977       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [874281c757752d105a8556805ff79aed622d9f254ebe06a2cedb769c5325806d] <==
	I0414 11:08:20.349147       1 serving.go:386] Generated self-signed cert in-memory
	W0414 11:08:22.700652       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 11:08:22.700805       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 11:08:22.700872       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 11:08:22.700921       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 11:08:22.797113       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0414 11:08:22.797147       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 11:08:22.799578       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0414 11:08:22.799939       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 11:08:22.799966       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 11:08:22.799996       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0414 11:08:22.900441       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 14 11:11:19 functional-397992 kubelet[5878]: E0414 11:11:19.033665    5878 manager.go:1116] Failed to create existing container: /crio-a69c45b4c51b83ae12055a04706388c6c98137d1a0a2fe40e7a74b33cc163df3: Error finding container a69c45b4c51b83ae12055a04706388c6c98137d1a0a2fe40e7a74b33cc163df3: Status 404 returned error can't find the container with id a69c45b4c51b83ae12055a04706388c6c98137d1a0a2fe40e7a74b33cc163df3
	Apr 14 11:11:19 functional-397992 kubelet[5878]: E0414 11:11:19.033828    5878 manager.go:1116] Failed to create existing container: /docker/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc/crio-1ab5b78f6af4dc27bd90e1e2deb1292c1128834fe16d77a8d5f6ac06c33aab10: Error finding container 1ab5b78f6af4dc27bd90e1e2deb1292c1128834fe16d77a8d5f6ac06c33aab10: Status 404 returned error can't find the container with id 1ab5b78f6af4dc27bd90e1e2deb1292c1128834fe16d77a8d5f6ac06c33aab10
	Apr 14 11:11:19 functional-397992 kubelet[5878]: E0414 11:11:19.033999    5878 manager.go:1116] Failed to create existing container: /docker/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc/crio-de9adb7129e83ed839e12e4b054c645c9706c90a291bd0fbe047100331910095: Error finding container de9adb7129e83ed839e12e4b054c645c9706c90a291bd0fbe047100331910095: Status 404 returned error can't find the container with id de9adb7129e83ed839e12e4b054c645c9706c90a291bd0fbe047100331910095
	Apr 14 11:11:19 functional-397992 kubelet[5878]: E0414 11:11:19.034170    5878 manager.go:1116] Failed to create existing container: /docker/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc/crio-573a02ae8a80fbabc6620086e295e793cfde16ecc574964534a698029d63ff50: Error finding container 573a02ae8a80fbabc6620086e295e793cfde16ecc574964534a698029d63ff50: Status 404 returned error can't find the container with id 573a02ae8a80fbabc6620086e295e793cfde16ecc574964534a698029d63ff50
	Apr 14 11:11:19 functional-397992 kubelet[5878]: E0414 11:11:19.034345    5878 manager.go:1116] Failed to create existing container: /crio-a6ec53a54873c74f8a4b9476ce79176d901bd78d63f74b57eadcac1b2ea72d95: Error finding container a6ec53a54873c74f8a4b9476ce79176d901bd78d63f74b57eadcac1b2ea72d95: Status 404 returned error can't find the container with id a6ec53a54873c74f8a4b9476ce79176d901bd78d63f74b57eadcac1b2ea72d95
	Apr 14 11:11:19 functional-397992 kubelet[5878]: E0414 11:11:19.034607    5878 manager.go:1116] Failed to create existing container: /crio-e8b5f563aec2a825f81f94cc56a6560fb9b9daa749b7f24d0fe3f4731afe0174: Error finding container e8b5f563aec2a825f81f94cc56a6560fb9b9daa749b7f24d0fe3f4731afe0174: Status 404 returned error can't find the container with id e8b5f563aec2a825f81f94cc56a6560fb9b9daa749b7f24d0fe3f4731afe0174
	Apr 14 11:11:19 functional-397992 kubelet[5878]: E0414 11:11:19.034822    5878 manager.go:1116] Failed to create existing container: /docker/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc/crio-c60490d3217ba0373de2a4c510cda3e06ef5681b8bb2236d377efc2b34ea7d26: Error finding container c60490d3217ba0373de2a4c510cda3e06ef5681b8bb2236d377efc2b34ea7d26: Status 404 returned error can't find the container with id c60490d3217ba0373de2a4c510cda3e06ef5681b8bb2236d377efc2b34ea7d26
	Apr 14 11:11:19 functional-397992 kubelet[5878]: E0414 11:11:19.052531    5878 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629079052312144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:11:19 functional-397992 kubelet[5878]: E0414 11:11:19.052574    5878 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629079052312144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:11:25 functional-397992 kubelet[5878]: E0414 11:11:25.895909    5878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-t7srs" podUID="f5cb57c1-b791-4cef-82c2-c394d95380d1"
	Apr 14 11:11:29 functional-397992 kubelet[5878]: E0414 11:11:29.053926    5878 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629089053732993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:11:29 functional-397992 kubelet[5878]: E0414 11:11:29.053965    5878 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629089053732993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:11:39 functional-397992 kubelet[5878]: E0414 11:11:39.055468    5878 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629099055294034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:11:39 functional-397992 kubelet[5878]: E0414 11:11:39.055503    5878 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629099055294034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:11:43 functional-397992 kubelet[5878]: E0414 11:11:43.535600    5878 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Apr 14 11:11:43 functional-397992 kubelet[5878]: E0414 11:11:43.535690    5878 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Apr 14 11:11:43 functional-397992 kubelet[5878]: E0414 11:11:43.535948    5878 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zgb4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(4db27b76-4527-43c2-86b0-d2afd06af2d2): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 14 11:11:43 functional-397992 kubelet[5878]: E0414 11:11:43.537371    5878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="4db27b76-4527-43c2-86b0-d2afd06af2d2"
	Apr 14 11:11:49 functional-397992 kubelet[5878]: E0414 11:11:49.057101    5878 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629109056863314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:11:49 functional-397992 kubelet[5878]: E0414 11:11:49.057146    5878 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629109056863314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:11:55 functional-397992 kubelet[5878]: E0414 11:11:55.895815    5878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="4db27b76-4527-43c2-86b0-d2afd06af2d2"
	Apr 14 11:11:59 functional-397992 kubelet[5878]: E0414 11:11:59.058589    5878 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629119058405711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:11:59 functional-397992 kubelet[5878]: E0414 11:11:59.058637    5878 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629119058405711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:12:09 functional-397992 kubelet[5878]: E0414 11:12:09.060129    5878 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629129059939044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:12:09 functional-397992 kubelet[5878]: E0414 11:12:09.060175    5878 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629129059939044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
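
	The recurring pull failures above share one root cause: docker.io returns toomanyrequests for unauthenticated pulls, so docker.io/mysql:5.7 and docker.io/nginx:alpine never start, which is why the mysql and nginx-svc pods in the pod list earlier never became Ready. One hedged workaround, assuming the images are already cached on the machine running the tests, is to side-load them instead of pulling inside the node:
	$ minikube -p functional-397992 image load docker.io/nginx:alpine   # assumes image present locally
	$ minikube -p functional-397992 image load docker.io/mysql:5.7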
	
	
	==> kubernetes-dashboard [d2509f8cecf0c37199f1cf29d0d713b80ae5ea20e074b8ee87731c870ff3c3f7] <==
	2025/04/14 11:09:27 Starting overwatch
	2025/04/14 11:09:27 Using namespace: kubernetes-dashboard
	2025/04/14 11:09:27 Using in-cluster config to connect to apiserver
	2025/04/14 11:09:27 Using secret token for csrf signing
	2025/04/14 11:09:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/04/14 11:09:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/04/14 11:09:27 Successful initial request to the apiserver, version: v1.32.2
	2025/04/14 11:09:27 Generating JWE encryption key
	2025/04/14 11:09:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/04/14 11:09:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/04/14 11:09:28 Initializing JWE encryption key from synchronized object
	2025/04/14 11:09:28 Creating in-cluster Sidecar client
	2025/04/14 11:09:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/14 11:09:28 Serving insecurely on HTTP port: 9090
	2025/04/14 11:09:58 Successful request to sidecar
	
	
	==> storage-provisioner [786146c75553e901b663f3fceda808551485ff5e7835b1af93cfdeb1332ac6d1] <==
	I0414 11:08:23.326586       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0414 11:08:23.389943       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0414 11:08:23.390066       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0414 11:08:40.787414       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0414 11:08:40.787516       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8b113c78-2c19-496b-a7b4-656c6c0d4710", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-397992_eed95be4-8492-4650-b357-7e38ffe7a486 became leader
	I0414 11:08:40.787731       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-397992_eed95be4-8492-4650-b357-7e38ffe7a486!
	I0414 11:08:40.887945       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-397992_eed95be4-8492-4650-b357-7e38ffe7a486!
	I0414 11:09:09.318859       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0414 11:09:09.319075       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"17ae6257-898d-43ac-b0fe-270ee6ac66d7", APIVersion:"v1", ResourceVersion:"876", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0414 11:09:09.318941       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    883ba934-0cc8-45ea-9114-61609882a8d5 383 0 2025-04-14 11:07:08 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-04-14 11:07:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-17ae6257-898d-43ac-b0fe-270ee6ac66d7 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  17ae6257-898d-43ac-b0fe-270ee6ac66d7 876 0 2025-04-14 11:09:09 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-04-14 11:09:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-04-14 11:09:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0414 11:09:09.319430       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-17ae6257-898d-43ac-b0fe-270ee6ac66d7" provisioned
	I0414 11:09:09.319458       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0414 11:09:09.319465       1 volume_store.go:212] Trying to save persistentvolume "pvc-17ae6257-898d-43ac-b0fe-270ee6ac66d7"
	I0414 11:09:09.327881       1 volume_store.go:219] persistentvolume "pvc-17ae6257-898d-43ac-b0fe-270ee6ac66d7" saved
	I0414 11:09:09.328035       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"17ae6257-898d-43ac-b0fe-270ee6ac66d7", APIVersion:"v1", ResourceVersion:"876", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-17ae6257-898d-43ac-b0fe-270ee6ac66d7
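The provisioner log above dumps the entire PVC object inline. Stripped down to the fields visible in that dump, the claim it provisioned is roughly the following (a reconstruction from the log, not the test's actual manifest file):

# Sketch: "myclaim" as reconstructed from the provisioner's object dump:
# ReadWriteOnce, 500Mi, bound by the default "standard" hostpath class.
kubectl --context functional-397992 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
EOF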
	
	
	==> storage-provisioner [fbaeb710ff28b84bb069df504dfd868d137e50e5e20e930cdcd2a87990c17a25] <==
	I0414 11:08:07.291260       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0414 11:08:07.292793       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
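This earlier storage-provisioner instance crashed at 11:08:07 because the apiserver behind 10.96.0.1:443 was not yet reachable; the instance above started 16 seconds later and won the lease. When only the restarted container is visible, the crashed run's output can still be retrieved with the --previous flag, e.g.:

# Sketch: fetch logs of the last terminated run of the container.
kubectl --context functional-397992 -n kube-system logs storage-provisioner --previous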
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-397992 -n functional-397992
helpers_test.go:261: (dbg) Run:  kubectl --context functional-397992 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-t7srs nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-397992 describe pod busybox-mount mysql-58ccfd96bb-t7srs nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-397992 describe pod busybox-mount mysql-58ccfd96bb-t7srs nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-397992/192.168.49.2
	Start Time:       Mon, 14 Apr 2025 11:08:50 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://f9b84f5e2f8adb7ac4d158c3801e825e32ecb0b0151a87151537390a6f08908e
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 14 Apr 2025 11:08:54 +0000
	      Finished:     Mon, 14 Apr 2025 11:08:54 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7t24w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-7t24w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m21s  default-scheduler  Successfully assigned default/busybox-mount to functional-397992
	  Normal  Pulling    3m21s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m19s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.156s (2.334s including waiting). Image size: 4631262 bytes.
	  Normal  Created    3m18s  kubelet            Created container: mount-munger
	  Normal  Started    3m18s  kubelet            Started container mount-munger
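busybox-mount is the only pod here that completed: its command reads and writes files under /mount-9p, a host directory shared into the node by the mount test. A hedged sketch of that mechanism (the host path is illustrative; the real test uses a temp directory):

# Sketch: share a host directory into the cluster at /mount-9p over 9p.
out/minikube-linux-amd64 -p functional-397992 mount /tmp/mount-test:/mount-9p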
	
	
	Name:             mysql-58ccfd96bb-t7srs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-397992/192.168.49.2
	Start Time:       Mon, 14 Apr 2025 11:08:50 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z28n6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-z28n6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m21s                default-scheduler  Successfully assigned default/mysql-58ccfd96bb-t7srs to functional-397992
	  Warning  Failed     2m48s                kubelet            Failed to pull image "docker.io/mysql:5.7": initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     59s (x2 over 2m48s)  kubelet            Error: ErrImagePull
	  Warning  Failed     59s                  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    47s (x2 over 2m47s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     47s (x2 over 2m47s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    34s (x3 over 3m21s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-397992/192.168.49.2
	Start Time:       Mon, 14 Apr 2025 11:09:03 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zgb4r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zgb4r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m8s                 default-scheduler  Successfully assigned default/nginx-svc to functional-397992
	  Warning  Failed     29s (x2 over 2m13s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     29s (x2 over 2m13s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    17s (x2 over 2m13s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     17s (x2 over 2m13s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2s (x3 over 3m8s)    kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-397992/192.168.49.2
	Start Time:       Mon, 14 Apr 2025 11:09:09 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xlhql (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-xlhql:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m2s                default-scheduler  Successfully assigned default/sp-pod to functional-397992
	  Warning  Failed     89s                 kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:b6653fca400812e81569f9be762ae315db685bc30b12ddcdc8616c63a227d3ca in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     89s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    89s                 kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     89s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    77s (x2 over 3m3s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
E0414 11:12:43.691809 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.10s)

                                                
                                    
TestFunctional/parallel/MySQL (603.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-397992 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-t7srs" [f5cb57c1-b791-4cef-82c2-c394d95380d1] Pending
helpers_test.go:344: "mysql-58ccfd96bb-t7srs" [f5cb57c1-b791-4cef-82c2-c394d95380d1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1816: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1816: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-397992 -n functional-397992
functional_test.go:1816: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-04-14 11:18:50.944202698 +0000 UTC m=+1104.112590545
functional_test.go:1816: (dbg) Run:  kubectl --context functional-397992 describe po mysql-58ccfd96bb-t7srs -n default
functional_test.go:1816: (dbg) kubectl --context functional-397992 describe po mysql-58ccfd96bb-t7srs -n default:
Name:             mysql-58ccfd96bb-t7srs
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-397992/192.168.49.2
Start Time:       Mon, 14 Apr 2025 11:08:50 +0000
Labels:           app=mysql
pod-template-hash=58ccfd96bb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/mysql-58ccfd96bb
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z28n6 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-z28n6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-58ccfd96bb-t7srs to functional-397992
Warning  Failed     9m27s                  kubelet            Failed to pull image "docker.io/mysql:5.7": initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     5m38s (x2 over 7m38s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    2m33s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     116s (x5 over 9m27s)   kubelet            Error: ErrImagePull
Warning  Failed     116s (x2 over 4m1s)    kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     56s (x16 over 9m26s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    4s (x20 over 9m26s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1816: (dbg) Run:  kubectl --context functional-397992 logs mysql-58ccfd96bb-t7srs -n default
functional_test.go:1816: (dbg) Non-zero exit: kubectl --context functional-397992 logs mysql-58ccfd96bb-t7srs -n default: exit status 1 (74.468311ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-58ccfd96bb-t7srs" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1816: kubectl --context functional-397992 logs mysql-58ccfd96bb-t7srs -n default: exit status 1
functional_test.go:1818: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
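Every failed pull in this run shares one root cause: unauthenticated pulls from docker.io hitting the toomanyrequests rate limit. Two hedged mitigations, neither taken from this report: side-load the image from the host, or give the cluster Docker Hub credentials.

# Sketch 1: pull on the host (separate quota/credentials), then inject the
# image into the minikube node so the kubelet never contacts Docker Hub.
docker pull mysql:5.7
minikube -p functional-397992 image load mysql:5.7

# Sketch 2: create a registry pull secret; <user>/<token> are placeholders.
# (Pods must then reference it via imagePullSecrets; not shown here.)
kubectl --context functional-397992 create secret docker-registry dockerhub-creds \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> \
  --docker-password=<token>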
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-397992
helpers_test.go:235: (dbg) docker inspect functional-397992:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc",
	        "Created": "2025-04-14T11:06:49.834352624Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1787666,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-14T11:06:49.868590846Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fa6441117abd3f0ec72d78de011fb44ecb7b1e274ddcf240e39454ed1f98f388",
	        "ResolvConfPath": "/var/lib/docker/containers/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc/hosts",
	        "LogPath": "/var/lib/docker/containers/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc-json.log",
	        "Name": "/functional-397992",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-397992:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-397992",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc",
	                "LowerDir": "/var/lib/docker/overlay2/301dc51297eff587e2b311e5e517ecd1ab37f32f27af195ae887c74143b9c8ca-init/diff:/var/lib/docker/overlay2/c6d8bf10401ece8b3f73261aeb3a606dd205e8233950c57e244d9cccf977865e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/301dc51297eff587e2b311e5e517ecd1ab37f32f27af195ae887c74143b9c8ca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/301dc51297eff587e2b311e5e517ecd1ab37f32f27af195ae887c74143b9c8ca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/301dc51297eff587e2b311e5e517ecd1ab37f32f27af195ae887c74143b9c8ca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-397992",
	                "Source": "/var/lib/docker/volumes/functional-397992/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-397992",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-397992",
	                "name.minikube.sigs.k8s.io": "functional-397992",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d4323061bcecf728495c82fc5d6db56af458588f31f05466712855bbc0cad60",
	            "SandboxKey": "/var/run/docker/netns/1d4323061bce",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-397992": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:c1:4c:6d:72:2d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bce73168dd39f3271483b4ecc01f803685c46614a56b6cd43f5aab2ca136d260",
	                    "EndpointID": "102208a251aa043f6d7b5643cd84d621aaa86cc175eb6ae21084dd35876ac03d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-397992",
	                        "ee9515e3608b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-397992 -n functional-397992
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-397992 logs -n 25: (1.489644763s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                    Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-397992 ssh cat                                                  | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | /etc/hostname                                                              |                   |         |         |                     |                     |
	| image          | functional-397992 image ls                                                 | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	| ssh            | functional-397992 ssh -n                                                   | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | functional-397992 sudo cat                                                 |                   |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                                   |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                         | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | -p functional-397992                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                     |                   |         |         |                     |                     |
	| cp             | functional-397992 cp                                                       | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | functional-397992:/home/docker/cp-test.txt                                 |                   |         |         |                     |                     |
	|                | /tmp/TestFunctionalparallelCpCmd954204075/001/cp-test.txt                  |                   |         |         |                     |                     |
	| image          | functional-397992 image save kicbase/echo-server:functional-397992         | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-397992 ssh -n                                                   | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | functional-397992 sudo cat                                                 |                   |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                                   |                   |         |         |                     |                     |
	| image          | functional-397992 image rm                                                 | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | kicbase/echo-server:functional-397992                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| cp             | functional-397992 cp                                                       | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | testdata/cp-test.txt                                                       |                   |         |         |                     |                     |
	|                | /tmp/does/not/exist/cp-test.txt                                            |                   |         |         |                     |                     |
	| image          | functional-397992 image ls                                                 | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	| ssh            | functional-397992 ssh -n                                                   | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | functional-397992 sudo cat                                                 |                   |         |         |                     |                     |
	|                | /tmp/does/not/exist/cp-test.txt                                            |                   |         |         |                     |                     |
	| image          | functional-397992 image load                                               | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| tunnel         | functional-397992 tunnel                                                   | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| tunnel         | functional-397992 tunnel                                                   | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| tunnel         | functional-397992 tunnel                                                   | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| update-context | functional-397992                                                          | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-397992                                                          | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-397992                                                          | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| image          | functional-397992                                                          | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | image ls --format short                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-397992                                                          | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | image ls --format yaml                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-397992 ssh pgrep                                                | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC |                     |
	|                | buildkitd                                                                  |                   |         |         |                     |                     |
	| image          | functional-397992 image build -t                                           | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | localhost/my-image:functional-397992                                       |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                           |                   |         |         |                     |                     |
	| image          | functional-397992 image ls                                                 | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	| image          | functional-397992                                                          | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | image ls --format json                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-397992                                                          | functional-397992 | jenkins | v1.35.0 | 14 Apr 25 11:09 UTC | 14 Apr 25 11:09 UTC |
	|                | image ls --format table                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 11:09:00
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 11:09:00.864609 1802187 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:09:00.864747 1802187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:09:00.864756 1802187 out.go:358] Setting ErrFile to fd 2...
	I0414 11:09:00.864760 1802187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:09:00.864994 1802187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
	I0414 11:09:00.865529 1802187 out.go:352] Setting JSON to false
	I0414 11:09:00.866705 1802187 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":157889,"bootTime":1744471052,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 11:09:00.866793 1802187 start.go:139] virtualization: kvm guest
	I0414 11:09:00.868846 1802187 out.go:177] * [functional-397992] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 11:09:00.871217 1802187 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 11:09:00.871237 1802187 notify.go:220] Checking for updates...
	I0414 11:09:00.874628 1802187 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 11:09:00.876643 1802187 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-1756784/kubeconfig
	I0414 11:09:00.878306 1802187 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-1756784/.minikube
	I0414 11:09:00.879752 1802187 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 11:09:00.881286 1802187 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 11:09:00.883212 1802187 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:09:00.883760 1802187 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 11:09:00.913295 1802187 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0414 11:09:00.913379 1802187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 11:09:00.979616 1802187 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-14 11:09:00.967892855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0414 11:09:00.979772 1802187 docker.go:318] overlay module found
	I0414 11:09:00.981938 1802187 out.go:177] * Using the docker driver based on existing profile
	I0414 11:09:00.983292 1802187 start.go:297] selected driver: docker
	I0414 11:09:00.983307 1802187 start.go:901] validating driver "docker" against &{Name:functional-397992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-397992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:09:00.983405 1802187 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 11:09:00.983538 1802187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 11:09:01.047007 1802187 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-14 11:09:01.03762816 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0414 11:09:01.047687 1802187 cni.go:84] Creating CNI manager for ""
	I0414 11:09:01.047763 1802187 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0414 11:09:01.047822 1802187 start.go:340] cluster config:
	{Name:functional-397992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-397992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:09:01.050428 1802187 out.go:177] * dry-run validation complete!
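Editor's note: the resolved cluster config dumped above is persisted per profile by minikube, so it can be re-inspected without re-running the dry run. A minimal sketch, assuming the default MINIKUBE_HOME layout (~/.minikube) and the profile name shown in this log:

	# Pretty-print the saved cluster config for the functional-397992 profile.
	# Path assumes minikube's default profile directory; adjust if MINIKUBE_HOME is set.
	python3 -m json.tool ~/.minikube/profiles/functional-397992/config.json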
	
	
	==> CRI-O <==
	Apr 14 11:17:33 functional-397992 crio[5503]: time="2025-04-14 11:17:33.895400396Z" level=info msg="Image docker.io/mysql:5.7 not found" id=16ddb9b8-bd39-416c-ae7b-4f7295b0e7e2 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:17:35 functional-397992 crio[5503]: time="2025-04-14 11:17:35.895339692Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=8aee9515-93d6-4662-8f89-636321d23d75 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:17:35 functional-397992 crio[5503]: time="2025-04-14 11:17:35.895672173Z" level=info msg="Image docker.io/nginx:alpine not found" id=8aee9515-93d6-4662-8f89-636321d23d75 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:17:44 functional-397992 crio[5503]: time="2025-04-14 11:17:44.895267582Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=608e2be1-0b01-4794-9cca-5040c9ab7d88 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:17:44 functional-397992 crio[5503]: time="2025-04-14 11:17:44.895533494Z" level=info msg="Image docker.io/mysql:5.7 not found" id=608e2be1-0b01-4794-9cca-5040c9ab7d88 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:17:50 functional-397992 crio[5503]: time="2025-04-14 11:17:50.894950733Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=35afc437-f30e-46df-a99f-44137d20823a name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:17:50 functional-397992 crio[5503]: time="2025-04-14 11:17:50.895244196Z" level=info msg="Image docker.io/nginx:alpine not found" id=35afc437-f30e-46df-a99f-44137d20823a name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:17:55 functional-397992 crio[5503]: time="2025-04-14 11:17:55.895000463Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=3013a2f4-1c54-4f74-91ea-d279f7337c68 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:17:55 functional-397992 crio[5503]: time="2025-04-14 11:17:55.895214547Z" level=info msg="Image docker.io/mysql:5.7 not found" id=3013a2f4-1c54-4f74-91ea-d279f7337c68 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:18:05 functional-397992 crio[5503]: time="2025-04-14 11:18:05.895494369Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=ed243e82-4b2d-4368-8cf1-66f840e91f1b name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:18:05 functional-397992 crio[5503]: time="2025-04-14 11:18:05.895720487Z" level=info msg="Image docker.io/nginx:alpine not found" id=ed243e82-4b2d-4368-8cf1-66f840e91f1b name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:18:07 functional-397992 crio[5503]: time="2025-04-14 11:18:07.895156569Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=da81d9f6-9836-4d55-9992-225569291112 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:18:07 functional-397992 crio[5503]: time="2025-04-14 11:18:07.895435565Z" level=info msg="Image docker.io/mysql:5.7 not found" id=da81d9f6-9836-4d55-9992-225569291112 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:18:19 functional-397992 crio[5503]: time="2025-04-14 11:18:19.895076662Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=92a19f74-1d49-47b1-bc30-624c9f617793 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:18:19 functional-397992 crio[5503]: time="2025-04-14 11:18:19.895310491Z" level=info msg="Image docker.io/nginx:alpine not found" id=92a19f74-1d49-47b1-bc30-624c9f617793 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:18:22 functional-397992 crio[5503]: time="2025-04-14 11:18:22.895420122Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=82a87309-4190-4e88-8df8-b9fcc48f6ff6 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:18:22 functional-397992 crio[5503]: time="2025-04-14 11:18:22.895674274Z" level=info msg="Image docker.io/mysql:5.7 not found" id=82a87309-4190-4e88-8df8-b9fcc48f6ff6 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:18:33 functional-397992 crio[5503]: time="2025-04-14 11:18:33.894509680Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=be93fe1b-ec8c-44c5-a0c8-7054f3d91ebd name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:18:33 functional-397992 crio[5503]: time="2025-04-14 11:18:33.894807618Z" level=info msg="Image docker.io/nginx:alpine not found" id=be93fe1b-ec8c-44c5-a0c8-7054f3d91ebd name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:18:34 functional-397992 crio[5503]: time="2025-04-14 11:18:34.894544836Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=4e22117e-f629-45ae-9375-3539834f1d28 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:18:34 functional-397992 crio[5503]: time="2025-04-14 11:18:34.894819524Z" level=info msg="Image docker.io/mysql:5.7 not found" id=4e22117e-f629-45ae-9375-3539834f1d28 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:18:46 functional-397992 crio[5503]: time="2025-04-14 11:18:46.894803452Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7739df00-0c3e-431e-ae29-a081cfa3464c name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:18:46 functional-397992 crio[5503]: time="2025-04-14 11:18:46.895069430Z" level=info msg="Image docker.io/nginx:alpine not found" id=7739df00-0c3e-431e-ae29-a081cfa3464c name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:18:47 functional-397992 crio[5503]: time="2025-04-14 11:18:47.895244772Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=9cd25c75-20b2-4c67-8143-058f5ee42696 name=/runtime.v1.ImageService/ImageStatus
	Apr 14 11:18:47 functional-397992 crio[5503]: time="2025-04-14 11:18:47.895470524Z" level=info msg="Image docker.io/mysql:5.7 not found" id=9cd25c75-20b2-4c67-8143-058f5ee42696 name=/runtime.v1.ImageService/ImageStatus
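Editor's note: the CRI-O log above shows the kubelet repeatedly polling ImageStatus for docker.io/mysql:5.7 and docker.io/nginx:alpine every 10-15 seconds without the images ever appearing, which points at the pulls failing (e.g. registry errors or Docker Hub rate limiting) rather than at CRI-O itself. A hedged sketch for confirming this from the node, assuming the profile is reachable via minikube ssh:

	# Check whether the images ever landed in CRI-O's image store.
	minikube -p functional-397992 ssh -- sudo crictl images | grep -E 'mysql|nginx'
	# Attempt a manual pull to surface the underlying registry error.
	minikube -p functional-397992 ssh -- sudo crictl pull docker.io/library/mysql:5.7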
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	b28f7aaf9f006       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   887a9e2965f3c       dashboard-metrics-scraper-5d59dccf9b-wmhqq
	d2509f8cecf0c       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         9 minutes ago       Running             kubernetes-dashboard        0                   743873e9e229d       kubernetes-dashboard-7779f9b69b-kmhxl
	f9b84f5e2f8ad       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              9 minutes ago       Exited              mount-munger                0                   b519ae0be3f04       busybox-mount
	15ad8b35b6c71       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   26aff244b7a29       hello-node-fcfd88b6f-bhzs7
	510f0bcb613db       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   2a30d0e7ed5b8       hello-node-connect-58f9cf68d8-b8vph
	c2c08afa4f2c6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 10 minutes ago      Running             coredns                     3                   a6ec53a54873c       coredns-668d6bf9bc-mcc6f
	786146c75553e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         4                   de9adb7129e83       storage-provisioner
	6a5a1af4fde4e       df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f                                                 10 minutes ago      Running             kindnet-cni                 3                   c60490d3217ba       kindnet-kqbzh
	3f6463700e2d2       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                 10 minutes ago      Running             kube-proxy                  3                   1ab5b78f6af4d       kube-proxy-fv6dh
	912dcca79261c       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                 10 minutes ago      Running             kube-apiserver              0                   cfb32d268c47a       kube-apiserver-functional-397992
	fc6452ab618e5       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 10 minutes ago      Running             etcd                        3                   0a2c0aabda440       etcd-functional-397992
	25110bcc33d0a       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                 10 minutes ago      Running             kube-controller-manager     3                   e931f99ffc530       kube-controller-manager-functional-397992
	874281c757752       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                 10 minutes ago      Running             kube-scheduler              3                   573a02ae8a80f       kube-scheduler-functional-397992
	fbaeb710ff28b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Exited              storage-provisioner         3                   de9adb7129e83       storage-provisioner
	e58d82dc5b3ed       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 10 minutes ago      Exited              coredns                     2                   a6ec53a54873c       coredns-668d6bf9bc-mcc6f
	a2806b2334f56       df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f                                                 10 minutes ago      Exited              kindnet-cni                 2                   c60490d3217ba       kindnet-kqbzh
	bf3202a776e06       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                 10 minutes ago      Exited              kube-proxy                  2                   1ab5b78f6af4d       kube-proxy-fv6dh
	572405034f047       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                 11 minutes ago      Exited              kube-controller-manager     2                   e931f99ffc530       kube-controller-manager-functional-397992
	1c75dd3a3a07d       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                 11 minutes ago      Exited              kube-scheduler              2                   573a02ae8a80f       kube-scheduler-functional-397992
	602d548a5e25a       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 11 minutes ago      Exited              etcd                        2                   0a2c0aabda440       etcd-functional-397992
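Editor's note: several control-plane containers above are at ATTEMPT 2-3 with Exited predecessors sharing the same POD IDs, consistent with the control plane being restarted in place during the functional tests rather than the pods being rescheduled. A sketch for reproducing this table on the node, assuming the default crio runtime endpoint:

	# Same view as the 'container status' section: all containers, including exited ones.
	minikube -p functional-397992 ssh -- sudo crictl ps -a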
	
	
	==> coredns [c2c08afa4f2c6302f97cc265d7cc98e9b1e9b7bcc099660121e76f6182341812] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58567 - 17606 "HINFO IN 4259550440992458422.831302973880644257. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.01683688s
	
	
	==> coredns [e58d82dc5b3edfadeafe1423c76d56ee6f05de6d543988d6019c93f0834c244a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:59460 - 57570 "HINFO IN 3819271620822108180.3662015948633467775. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019372607s
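Editor's note: the connection-refused errors against 10.96.0.1:443 in the older CoreDNS instance line up with the apiserver restart window (etcd reports shutdown at 11:08:05 and the new apiserver's caches sync at 11:08:22), after which the replacement instance [c2c08afa...] starts cleanly. A hedged sketch for verifying that cluster DNS recovered, assuming the kubectl context matches the profile name (minikube's default):

	# CoreDNS pod should be Running with its restarts accounted for.
	kubectl --context functional-397992 -n kube-system get pods -l k8s-app=kube-dns
	# One-shot lookup from inside the cluster (probe pod name and busybox tag are illustrative).
	kubectl --context functional-397992 run dns-probe --rm -it --restart=Never \
	  --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local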
	
	
	==> describe nodes <==
	Name:               functional-397992
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-397992
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=43cb59e6a4e9845c84b0379fb52045b7420d26a4
	                    minikube.k8s.io/name=functional-397992
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_14T11_07_04_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Apr 2025 11:07:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-397992
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Apr 2025 11:18:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Apr 2025 11:16:33 +0000   Mon, 14 Apr 2025 11:06:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Apr 2025 11:16:33 +0000   Mon, 14 Apr 2025 11:06:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Apr 2025 11:16:33 +0000   Mon, 14 Apr 2025 11:06:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Apr 2025 11:16:33 +0000   Mon, 14 Apr 2025 11:07:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-397992
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859372Ki
	  pods:               110
	System Info:
	  Machine ID:                 5060630ae9cf455f97fef0a5b6278d48
	  System UUID:                af47fcb5-b8d0-4c0e-b2e2-edf0948935f7
	  Boot ID:                    bd6a00b0-3ee2-4efc-b732-99fd1a304e32
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-b8vph           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-fcfd88b6f-bhzs7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-58ccfd96bb-t7srs                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	  kube-system                 coredns-668d6bf9bc-mcc6f                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-397992                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-kqbzh                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-397992              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-397992     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-fv6dh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-397992              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-wmhqq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-kmhxl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-397992 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-397992 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-397992 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     11m                kubelet          Node functional-397992 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node functional-397992 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node functional-397992 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           11m                node-controller  Node functional-397992 event: Registered Node functional-397992 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-397992 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-397992 event: Registered Node functional-397992 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-397992 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-397992 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-397992 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-397992 event: Registered Node functional-397992 in Controller
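Editor's note: the event list shows three "Starting kubelet." / RegisteredNode cycles, consistent with the ATTEMPT counts in the container table, while the node itself has stayed Ready since 11:07:21. A sketch for regenerating this view and watching for further restarts, again assuming the context name equals the profile name:

	# Regenerate the node description used for this section.
	kubectl --context functional-397992 describe node functional-397992
	# Most recent cluster events across all namespaces.
	kubectl --context functional-397992 get events -A --sort-by=.lastTimestamp | tail -n 20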
	
	
	==> dmesg <==
	[  +0.000002] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 2d 79 d0 08 00
	[Apr14 10:39] IPv4: martian source 192.168.122.1 from 10.244.105.193, on dev virbr0
	[  +0.000010] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 48 83 d2 08 00
	[Apr14 11:03] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[  +1.006764] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[  +2.015785] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[  +4.191573] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[  +8.191181] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000033] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[ +16.126363] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[Apr14 11:04] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a2 27 4b 62 70 e0 42 a3 77 61 a3 c2 08 00
	[Apr14 11:08] FS-Cache: Duplicate cookie detected
	[  +0.004795] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006892] FS-Cache: O-cookie d=000000003b4e49f9{9P.session} n=00000000ba1d09d0
	[  +0.007635] FS-Cache: O-key=[10] '34333334333633383138'
	[  +0.005464] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006729] FS-Cache: N-cookie d=000000003b4e49f9{9P.session} n=000000008e969b07
	[  +0.009036] FS-Cache: N-key=[10] '34333334333633383138'
	[Apr14 11:09] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [602d548a5e25ab9c928894cfaa03d2a1b620a7c613072309e942b380a6d3dc73] <==
	{"level":"info","ts":"2025-04-14T11:07:46.013920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-04-14T11:07:46.013937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-04-14T11:07:46.013952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2025-04-14T11:07:46.013960Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-04-14T11:07:46.013969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2025-04-14T11:07:46.013986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-04-14T11:07:46.015130Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-397992 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T11:07:46.015145Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T11:07:46.015174Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T11:07:46.015431Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T11:07:46.015468Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T11:07:46.015967Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T11:07:46.016169Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T11:07:46.016903Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T11:07:46.016901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-04-14T11:08:05.915560Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-04-14T11:08:05.915643Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-397992","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2025-04-14T11:08:05.915737Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-14T11:08:05.915845Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-14T11:08:05.925663Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-04-14T11:08:05.925723Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-04-14T11:08:05.925776Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-04-14T11:08:05.929876Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-04-14T11:08:05.929982Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-04-14T11:08:05.929991Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-397992","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [fc6452ab618e5cb40d933e6dd2f4912382c85468c60ab52387f2dd87ac2e931a] <==
	{"level":"info","ts":"2025-04-14T11:08:19.906126Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-14T11:08:19.907437Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-14T11:08:19.907947Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-14T11:08:19.907591Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-04-14T11:08:19.908525Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-04-14T11:08:19.908135Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-14T11:08:21.695033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-04-14T11:08:21.695086Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-04-14T11:08:21.695104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-04-14T11:08:21.695116Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 5"}
	{"level":"info","ts":"2025-04-14T11:08:21.695142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-04-14T11:08:21.695152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 5"}
	{"level":"info","ts":"2025-04-14T11:08:21.695159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 5"}
	{"level":"info","ts":"2025-04-14T11:08:21.698907Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-397992 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-14T11:08:21.698939Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T11:08:21.698910Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-14T11:08:21.699170Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-14T11:08:21.699217Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-14T11:08:21.699751Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T11:08:21.699926Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-14T11:08:21.701094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-04-14T11:08:21.701305Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-14T11:18:21.717134Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1157}
	{"level":"info","ts":"2025-04-14T11:18:21.729613Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1157,"took":"12.081273ms","hash":500231895,"current-db-size-bytes":4268032,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":1888256,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-04-14T11:18:21.729684Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":500231895,"revision":1157,"compact-revision":-1}
	
	
	==> kernel <==
	 11:18:52 up 1 day, 20:01,  0 users,  load average: 0.04, 0.26, 0.63
	Linux functional-397992 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6a5a1af4fde4e3a6ac49c6d166c7e83d8bac07206570a6df219b995eeffba955] <==
	I0414 11:16:43.982524       1 main.go:301] handling current node
	I0414 11:16:53.988794       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:16:53.988831       1 main.go:301] handling current node
	I0414 11:17:03.984957       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:17:03.984998       1 main.go:301] handling current node
	I0414 11:17:13.984504       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:17:13.984576       1 main.go:301] handling current node
	I0414 11:17:23.982078       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:17:23.982119       1 main.go:301] handling current node
	I0414 11:17:33.982777       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:17:33.982816       1 main.go:301] handling current node
	I0414 11:17:43.982135       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:17:43.982170       1 main.go:301] handling current node
	I0414 11:17:53.982676       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:17:53.982737       1 main.go:301] handling current node
	I0414 11:18:03.984490       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:18:03.984547       1 main.go:301] handling current node
	I0414 11:18:13.984493       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:18:13.984546       1 main.go:301] handling current node
	I0414 11:18:23.982949       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:18:23.982991       1 main.go:301] handling current node
	I0414 11:18:33.982856       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:18:33.982901       1 main.go:301] handling current node
	I0414 11:18:43.982441       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0414 11:18:43.982508       1 main.go:301] handling current node
	
	
	==> kindnet [a2806b2334f56cd16f941bfe45c04d1c5472a9772cd500e86900fc407fbdf70d] <==
	I0414 11:07:58.383871       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0414 11:07:58.384143       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0414 11:07:58.384295       1 main.go:148] setting mtu 1500 for CNI 
	I0414 11:07:58.384316       1 main.go:178] kindnetd IP family: "ipv4"
	I0414 11:07:58.384326       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0414 11:07:58.781571       1 controller.go:361] Starting controller kube-network-policies
	I0414 11:07:58.781739       1 controller.go:365] Waiting for informer caches to sync
	I0414 11:07:58.781904       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0414 11:07:59.082130       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0414 11:07:59.082175       1 metrics.go:61] Registering metrics
	I0414 11:07:59.082249       1 controller.go:401] Syncing nftables rules
	
	
	==> kube-apiserver [912dcca79261c8be88ad1d62cf21a9266e5708783b6ebf67a7d99f682ec36b36] <==
	I0414 11:08:22.823456       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0414 11:08:22.824030       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0414 11:08:22.880547       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0414 11:08:22.880606       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0414 11:08:22.880630       1 shared_informer.go:320] Caches are synced for configmaps
	I0414 11:08:22.882173       1 cache.go:39] Caches are synced for autoregister controller
	I0414 11:08:22.885314       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0414 11:08:22.890225       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0414 11:08:23.004376       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0414 11:08:23.640124       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0414 11:08:24.341984       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0414 11:08:24.435852       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0414 11:08:24.489635       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0414 11:08:24.495941       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0414 11:08:26.012491       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0414 11:08:26.262407       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0414 11:08:26.362556       1 controller.go:615] quota admission added evaluator for: endpoints
	I0414 11:08:44.549689       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.253.99"}
	I0414 11:08:48.621661       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.253.128"}
	I0414 11:08:48.802857       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.64.165"}
	I0414 11:08:50.590217       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.194.16"}
	I0414 11:09:02.794794       1 controller.go:615] quota admission added evaluator for: namespaces
	I0414 11:09:03.089066       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.208.31"}
	I0414 11:09:03.115802       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.20.32"}
	I0414 11:09:03.819430       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.30.131"}
	
	
	==> kube-controller-manager [25110bcc33d0a83994b71ce941dd8646b65c8d632cf330ec6ff317f23c10173e] <==
	E0414 11:09:02.922953       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0414 11:09:02.990588       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="62.367412ms"
	I0414 11:09:02.996936       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="15.741292ms"
	I0414 11:09:03.001304       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="10.653622ms"
	I0414 11:09:03.001551       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="52.847µs"
	I0414 11:09:03.004774       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="7.779713ms"
	I0414 11:09:03.004866       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="54.352µs"
	I0414 11:09:03.081148       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="87.358µs"
	I0414 11:09:24.196366       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-397992"
	I0414 11:09:25.186790       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="76.917µs"
	I0414 11:09:28.202436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="6.62128ms"
	I0414 11:09:28.202552       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="66.559µs"
	I0414 11:09:29.207869       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="6.515616ms"
	I0414 11:09:29.207999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="87.734µs"
	I0414 11:09:39.905061       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="91.662µs"
	I0414 11:09:54.591466       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-397992"
	I0414 11:11:25.904930       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="94.03µs"
	I0414 11:11:38.904834       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="74.063µs"
	I0414 11:13:24.904101       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="138.984µs"
	I0414 11:13:35.906175       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="96.721µs"
	I0414 11:15:04.906341       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="133.455µs"
	I0414 11:15:16.905138       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="141.874µs"
	I0414 11:16:33.145089       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-397992"
	I0414 11:17:07.906206       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="115.369µs"
	I0414 11:17:21.903934       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="123.885µs"
	
	
	==> kube-controller-manager [572405034f047e9d4d9120c834fa2df1d2d53665d6984cffb7176e16882eff84] <==
	I0414 11:07:50.339638       1 shared_informer.go:320] Caches are synced for endpoint
	I0414 11:07:50.339676       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0414 11:07:50.339787       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0414 11:07:50.342725       1 shared_informer.go:320] Caches are synced for resource quota
	I0414 11:07:50.343603       1 shared_informer.go:320] Caches are synced for node
	I0414 11:07:50.343656       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0414 11:07:50.343693       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0414 11:07:50.343704       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0414 11:07:50.343711       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0414 11:07:50.343780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-397992"
	I0414 11:07:50.346099       1 shared_informer.go:320] Caches are synced for persistent volume
	I0414 11:07:50.355437       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 11:07:50.361655       1 shared_informer.go:320] Caches are synced for resource quota
	I0414 11:07:50.367971       1 shared_informer.go:320] Caches are synced for taint
	I0414 11:07:50.368102       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0414 11:07:50.368200       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-397992"
	I0414 11:07:50.368242       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0414 11:07:50.372440       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0414 11:07:50.389493       1 shared_informer.go:320] Caches are synced for garbage collector
	I0414 11:07:50.389523       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0414 11:07:50.389537       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0414 11:07:50.598709       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="282.994498ms"
	I0414 11:07:50.598817       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="61.288µs"
	I0414 11:07:54.288011       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-397992"
	I0414 11:08:04.451439       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-397992"
	
	
	==> kube-proxy [3f6463700e2d2610638c2a3c7b8f4361f0ec6355ad9dd9035f9edb8a99977bb0] <==
	I0414 11:08:23.411613       1 server_linux.go:66] "Using iptables proxy"
	I0414 11:08:23.533227       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0414 11:08:23.533292       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 11:08:23.553588       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0414 11:08:23.553656       1 server_linux.go:170] "Using iptables Proxier"
	I0414 11:08:23.555711       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 11:08:23.556014       1 server.go:497] "Version info" version="v1.32.2"
	I0414 11:08:23.556059       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 11:08:23.557758       1 config.go:199] "Starting service config controller"
	I0414 11:08:23.557772       1 config.go:105] "Starting endpoint slice config controller"
	I0414 11:08:23.557817       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 11:08:23.557816       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 11:08:23.557864       1 config.go:329] "Starting node config controller"
	I0414 11:08:23.557873       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 11:08:23.658275       1 shared_informer.go:320] Caches are synced for node config
	I0414 11:08:23.658304       1 shared_informer.go:320] Caches are synced for service config
	I0414 11:08:23.658315       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [bf3202a776e066d2517485461c06e2439e57972ce8a48ac524bfbaa157e497da] <==
	I0414 11:07:55.306590       1 server_linux.go:66] "Using iptables proxy"
	I0414 11:07:55.413878       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0414 11:07:55.413960       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0414 11:07:55.438626       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0414 11:07:55.438699       1 server_linux.go:170] "Using iptables Proxier"
	I0414 11:07:55.440789       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0414 11:07:55.441235       1 server.go:497] "Version info" version="v1.32.2"
	I0414 11:07:55.441280       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 11:07:55.442574       1 config.go:105] "Starting endpoint slice config controller"
	I0414 11:07:55.442625       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0414 11:07:55.442631       1 config.go:199] "Starting service config controller"
	I0414 11:07:55.442663       1 config.go:329] "Starting node config controller"
	I0414 11:07:55.442661       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0414 11:07:55.442682       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0414 11:07:55.543129       1 shared_informer.go:320] Caches are synced for service config
	I0414 11:07:55.543257       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0414 11:07:55.543286       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1c75dd3a3a07d701062c6aef9ecbbe7f7deb8b5aaa95718eced9854a52571030] <==
	I0414 11:07:45.238502       1 serving.go:386] Generated self-signed cert in-memory
	W0414 11:07:47.086107       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 11:07:47.086220       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 11:07:47.086260       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 11:07:47.086295       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 11:07:47.191133       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0414 11:07:47.191181       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 11:07:47.193985       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 11:07:47.194041       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 11:07:47.194217       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0414 11:07:47.194316       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0414 11:07:47.294546       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 11:08:05.916472       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0414 11:08:05.916559       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0414 11:08:05.916655       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0414 11:08:05.916977       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [874281c757752d105a8556805ff79aed622d9f254ebe06a2cedb769c5325806d] <==
	I0414 11:08:20.349147       1 serving.go:386] Generated self-signed cert in-memory
	W0414 11:08:22.700652       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0414 11:08:22.700805       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0414 11:08:22.700872       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0414 11:08:22.700921       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0414 11:08:22.797113       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0414 11:08:22.797147       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0414 11:08:22.799578       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0414 11:08:22.799939       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0414 11:08:22.799966       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0414 11:08:22.799996       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0414 11:08:22.900441       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
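	
	Both scheduler instances log the same transient requestheader warning before starting normally. A sketch of the fix the message itself suggests, with a hypothetical binding name and the user named in the error:
	
	kubectl -n kube-system create rolebinding scheduler-auth-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler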
	
	
	==> kubelet <==
	Apr 14 11:18:19 functional-397992 kubelet[5878]: E0414 11:18:19.031040    5878 manager.go:1116] Failed to create existing container: /crio-0a2c0aabda440ca384923de612c6b1fffbc9d071522581f6804778803749e14e: Error finding container 0a2c0aabda440ca384923de612c6b1fffbc9d071522581f6804778803749e14e: Status 404 returned error can't find the container with id 0a2c0aabda440ca384923de612c6b1fffbc9d071522581f6804778803749e14e
	Apr 14 11:18:19 functional-397992 kubelet[5878]: E0414 11:18:19.031251    5878 manager.go:1116] Failed to create existing container: /docker/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc/crio-1ab5b78f6af4dc27bd90e1e2deb1292c1128834fe16d77a8d5f6ac06c33aab10: Error finding container 1ab5b78f6af4dc27bd90e1e2deb1292c1128834fe16d77a8d5f6ac06c33aab10: Status 404 returned error can't find the container with id 1ab5b78f6af4dc27bd90e1e2deb1292c1128834fe16d77a8d5f6ac06c33aab10
	Apr 14 11:18:19 functional-397992 kubelet[5878]: E0414 11:18:19.031429    5878 manager.go:1116] Failed to create existing container: /crio-a69c45b4c51b83ae12055a04706388c6c98137d1a0a2fe40e7a74b33cc163df3: Error finding container a69c45b4c51b83ae12055a04706388c6c98137d1a0a2fe40e7a74b33cc163df3: Status 404 returned error can't find the container with id a69c45b4c51b83ae12055a04706388c6c98137d1a0a2fe40e7a74b33cc163df3
	Apr 14 11:18:19 functional-397992 kubelet[5878]: E0414 11:18:19.031689    5878 manager.go:1116] Failed to create existing container: /crio-573a02ae8a80fbabc6620086e295e793cfde16ecc574964534a698029d63ff50: Error finding container 573a02ae8a80fbabc6620086e295e793cfde16ecc574964534a698029d63ff50: Status 404 returned error can't find the container with id 573a02ae8a80fbabc6620086e295e793cfde16ecc574964534a698029d63ff50
	Apr 14 11:18:19 functional-397992 kubelet[5878]: E0414 11:18:19.031958    5878 manager.go:1116] Failed to create existing container: /docker/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc/crio-0a2c0aabda440ca384923de612c6b1fffbc9d071522581f6804778803749e14e: Error finding container 0a2c0aabda440ca384923de612c6b1fffbc9d071522581f6804778803749e14e: Status 404 returned error can't find the container with id 0a2c0aabda440ca384923de612c6b1fffbc9d071522581f6804778803749e14e
	Apr 14 11:18:19 functional-397992 kubelet[5878]: E0414 11:18:19.032176    5878 manager.go:1116] Failed to create existing container: /docker/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc/crio-e931f99ffc5301150797ef17bfd8eb342a7dd155e9d16daec933ac6334db2a51: Error finding container e931f99ffc5301150797ef17bfd8eb342a7dd155e9d16daec933ac6334db2a51: Status 404 returned error can't find the container with id e931f99ffc5301150797ef17bfd8eb342a7dd155e9d16daec933ac6334db2a51
	Apr 14 11:18:19 functional-397992 kubelet[5878]: E0414 11:18:19.032371    5878 manager.go:1116] Failed to create existing container: /docker/ee9515e3608b12288a31343abe7b1ab5313320dfddf5309debab96c4635366cc/crio-a6ec53a54873c74f8a4b9476ce79176d901bd78d63f74b57eadcac1b2ea72d95: Error finding container a6ec53a54873c74f8a4b9476ce79176d901bd78d63f74b57eadcac1b2ea72d95: Status 404 returned error can't find the container with id a6ec53a54873c74f8a4b9476ce79176d901bd78d63f74b57eadcac1b2ea72d95
	Apr 14 11:18:19 functional-397992 kubelet[5878]: E0414 11:18:19.032606    5878 manager.go:1116] Failed to create existing container: /crio-de9adb7129e83ed839e12e4b054c645c9706c90a291bd0fbe047100331910095: Error finding container de9adb7129e83ed839e12e4b054c645c9706c90a291bd0fbe047100331910095: Status 404 returned error can't find the container with id de9adb7129e83ed839e12e4b054c645c9706c90a291bd0fbe047100331910095
	Apr 14 11:18:19 functional-397992 kubelet[5878]: E0414 11:18:19.116316    5878 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629499116156025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:18:19 functional-397992 kubelet[5878]: E0414 11:18:19.116349    5878 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629499116156025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:18:19 functional-397992 kubelet[5878]: E0414 11:18:19.895574    5878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="4db27b76-4527-43c2-86b0-d2afd06af2d2"
	Apr 14 11:18:20 functional-397992 kubelet[5878]: E0414 11:18:20.895194    5878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="237f72df-ab98-4bb0-85de-8ad55e36802d"
	Apr 14 11:18:22 functional-397992 kubelet[5878]: E0414 11:18:22.895983    5878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-t7srs" podUID="f5cb57c1-b791-4cef-82c2-c394d95380d1"
	Apr 14 11:18:29 functional-397992 kubelet[5878]: E0414 11:18:29.117798    5878 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629509117598554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:18:29 functional-397992 kubelet[5878]: E0414 11:18:29.117842    5878 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629509117598554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:18:32 functional-397992 kubelet[5878]: E0414 11:18:32.895064    5878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="237f72df-ab98-4bb0-85de-8ad55e36802d"
	Apr 14 11:18:33 functional-397992 kubelet[5878]: E0414 11:18:33.895141    5878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="4db27b76-4527-43c2-86b0-d2afd06af2d2"
	Apr 14 11:18:34 functional-397992 kubelet[5878]: E0414 11:18:34.895104    5878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-t7srs" podUID="f5cb57c1-b791-4cef-82c2-c394d95380d1"
	Apr 14 11:18:39 functional-397992 kubelet[5878]: E0414 11:18:39.119246    5878 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629519119072579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:18:39 functional-397992 kubelet[5878]: E0414 11:18:39.119285    5878 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629519119072579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:18:45 functional-397992 kubelet[5878]: E0414 11:18:45.894040    5878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="237f72df-ab98-4bb0-85de-8ad55e36802d"
	Apr 14 11:18:46 functional-397992 kubelet[5878]: E0414 11:18:46.895356    5878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="4db27b76-4527-43c2-86b0-d2afd06af2d2"
	Apr 14 11:18:47 functional-397992 kubelet[5878]: E0414 11:18:47.895731    5878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-t7srs" podUID="f5cb57c1-b791-4cef-82c2-c394d95380d1"
	Apr 14 11:18:49 functional-397992 kubelet[5878]: E0414 11:18:49.120860    5878 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629529120629363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 14 11:18:49 functional-397992 kubelet[5878]: E0414 11:18:49.120907    5878 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744629529120629363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236037,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
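	
	Every ImagePullBackOff in this block shares one root cause: Docker Hub's unauthenticated pull rate limit. A sketch of the standard workaround, assuming valid Docker Hub credentials ("regcred" and the placeholder values are hypothetical, not taken from this run):
	
	kubectl create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	# Attach the secret to the default service account so the test pods inherit it.
	kubectl patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'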
	
	
	==> kubernetes-dashboard [d2509f8cecf0c37199f1cf29d0d713b80ae5ea20e074b8ee87731c870ff3c3f7] <==
	2025/04/14 11:09:27 Starting overwatch
	2025/04/14 11:09:27 Using namespace: kubernetes-dashboard
	2025/04/14 11:09:27 Using in-cluster config to connect to apiserver
	2025/04/14 11:09:27 Using secret token for csrf signing
	2025/04/14 11:09:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/04/14 11:09:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/04/14 11:09:27 Successful initial request to the apiserver, version: v1.32.2
	2025/04/14 11:09:27 Generating JWE encryption key
	2025/04/14 11:09:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/04/14 11:09:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/04/14 11:09:28 Initializing JWE encryption key from synchronized object
	2025/04/14 11:09:28 Creating in-cluster Sidecar client
	2025/04/14 11:09:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/04/14 11:09:28 Serving insecurely on HTTP port: 9090
	2025/04/14 11:09:58 Successful request to sidecar
	
	
	==> storage-provisioner [786146c75553e901b663f3fceda808551485ff5e7835b1af93cfdeb1332ac6d1] <==
	I0414 11:08:23.326586       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0414 11:08:23.389943       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0414 11:08:23.390066       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0414 11:08:40.787414       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0414 11:08:40.787516       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8b113c78-2c19-496b-a7b4-656c6c0d4710", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-397992_eed95be4-8492-4650-b357-7e38ffe7a486 became leader
	I0414 11:08:40.787731       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-397992_eed95be4-8492-4650-b357-7e38ffe7a486!
	I0414 11:08:40.887945       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-397992_eed95be4-8492-4650-b357-7e38ffe7a486!
	I0414 11:09:09.318859       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0414 11:09:09.319075       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"17ae6257-898d-43ac-b0fe-270ee6ac66d7", APIVersion:"v1", ResourceVersion:"876", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0414 11:09:09.318941       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    883ba934-0cc8-45ea-9114-61609882a8d5 383 0 2025-04-14 11:07:08 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-04-14 11:07:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-17ae6257-898d-43ac-b0fe-270ee6ac66d7 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  17ae6257-898d-43ac-b0fe-270ee6ac66d7 876 0 2025-04-14 11:09:09 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-04-14 11:09:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-04-14 11:09:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0414 11:09:09.319430       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-17ae6257-898d-43ac-b0fe-270ee6ac66d7" provisioned
	I0414 11:09:09.319458       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0414 11:09:09.319465       1 volume_store.go:212] Trying to save persistentvolume "pvc-17ae6257-898d-43ac-b0fe-270ee6ac66d7"
	I0414 11:09:09.327881       1 volume_store.go:219] persistentvolume "pvc-17ae6257-898d-43ac-b0fe-270ee6ac66d7" saved
	I0414 11:09:09.328035       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"17ae6257-898d-43ac-b0fe-270ee6ac66d7", APIVersion:"v1", ResourceVersion:"876", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-17ae6257-898d-43ac-b0fe-270ee6ac66d7
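	
	The claim being provisioned can be reconstructed from the object dump above; a sketch of the equivalent manifest (layout inferred, values taken from the log; the "standard" class applies by virtue of being the default):
	
	kubectl apply -f - <<EOF
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi  # the 500Mi request visible in the provisioner log
	EOF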
	
	
	==> storage-provisioner [fbaeb710ff28b84bb069df504dfd868d137e50e5e20e930cdcd2a87990c17a25] <==
	I0414 11:08:07.291260       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0414 11:08:07.292793       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-397992 -n functional-397992
helpers_test.go:261: (dbg) Run:  kubectl --context functional-397992 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-t7srs nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-397992 describe pod busybox-mount mysql-58ccfd96bb-t7srs nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-397992 describe pod busybox-mount mysql-58ccfd96bb-t7srs nginx-svc sp-pod:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-397992/192.168.49.2
	Start Time:       Mon, 14 Apr 2025 11:08:50 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://f9b84f5e2f8adb7ac4d158c3801e825e32ecb0b0151a87151537390a6f08908e
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 14 Apr 2025 11:08:54 +0000
	      Finished:     Mon, 14 Apr 2025 11:08:54 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7t24w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-7t24w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  10m    default-scheduler  Successfully assigned default/busybox-mount to functional-397992
	  Normal  Pulling    10m    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.156s (2.334s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m59s  kubelet            Created container: mount-munger
	  Normal  Started    9m59s  kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-t7srs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-397992/192.168.49.2
	Start Time:       Mon, 14 Apr 2025 11:08:50 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z28n6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-z28n6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-58ccfd96bb-t7srs to functional-397992
	  Warning  Failed     9m29s                  kubelet            Failed to pull image "docker.io/mysql:5.7": initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m40s (x2 over 7m40s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m35s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     118s (x5 over 9m29s)   kubelet            Error: ErrImagePull
	  Warning  Failed     118s (x2 over 4m3s)    kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     58s (x16 over 9m28s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    6s (x20 over 9m28s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-397992/192.168.49.2
	Start Time:       Mon, 14 Apr 2025 11:09:03 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zgb4r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zgb4r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  9m49s                 default-scheduler  Successfully assigned default/nginx-svc to functional-397992
	  Normal   Pulling    2m9s (x5 over 9m49s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     88s (x5 over 8m54s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     88s (x5 over 8m54s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    7s (x16 over 8m54s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     7s (x16 over 8m54s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-397992/192.168.49.2
	Start Time:       Mon, 14 Apr 2025 11:09:09 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xlhql (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-xlhql:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m43s                  default-scheduler  Successfully assigned default/sp-pod to functional-397992
	  Warning  Failed     6m10s (x2 over 8m10s)  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:b6653fca400812e81569f9be762ae315db685bc30b12ddcdc8616c63a227d3ca in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    99s (x5 over 9m44s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     58s (x5 over 8m10s)    kubelet            Error: ErrImagePull
	  Warning  Failed     58s (x3 over 4m39s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    8s (x14 over 8m10s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     8s (x14 over 8m10s)    kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (603.02s)

TestFunctional/parallel/ImageCommands/Setup (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Non-zero exit: docker pull kicbase/echo-server:1.0: exit status 1 (151.199243ms)

** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

** /stderr **
functional_test.go:361: failed to setup test (pull image): exit status 1

** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/Setup (0.15s)
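
This setup failure happens on the host-side docker daemon, before minikube is involved. A sketch of the usual CI mitigation, with placeholder credentials:

	echo "<access-token>" | docker login --username <user> --password-stdin
	docker pull kicbase/echo-server:1.0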

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 image load --daemon kicbase/echo-server:functional-397992 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 image ls
functional_test.go:463: expected "kicbase/echo-server:functional-397992" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.59s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 image load --daemon kicbase/echo-server:functional-397992 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 image ls
functional_test.go:463: expected "kicbase/echo-server:functional-397992" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.57s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:252: (dbg) Non-zero exit: docker pull kicbase/echo-server:latest: exit status 1 (152.552742ms)

** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

** /stderr **
functional_test.go:254: failed to setup test (pull image): exit status 1

** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 image save kicbase/echo-server:functional-397992 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:403: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:428: loading image into minikube from file: <nil>

** stderr ** 
	I0414 11:09:02.893426 1803699 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:09:02.893619 1803699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:09:02.893637 1803699 out.go:358] Setting ErrFile to fd 2...
	I0414 11:09:02.893648 1803699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:09:02.894066 1803699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
	I0414 11:09:02.895264 1803699 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:09:02.895457 1803699 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:09:02.896433 1803699 cli_runner.go:164] Run: docker container inspect functional-397992 --format={{.State.Status}}
	I0414 11:09:02.924838 1803699 ssh_runner.go:195] Run: systemctl --version
	I0414 11:09:02.924916 1803699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-397992
	I0414 11:09:02.946402 1803699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/functional-397992/id_rsa Username:docker}
	I0414 11:09:03.086071 1803699 cache_images.go:289] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W0414 11:09:03.086155 1803699 cache_images.go:253] Failed to load cached images for "functional-397992": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I0414 11:09:03.086201 1803699 cache_images.go:265] failed pushing to: functional-397992

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.26s)
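
This failure is downstream of ImageSaveToFile: the tarball was never written (note the "stat ... no such file or directory" in the stderr above), so there was nothing to load. The intended round trip, as the tests invoke it:

	out/minikube-linux-amd64 -p functional-397992 image save kicbase/echo-server:functional-397992 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-397992 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar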

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-397992
functional_test.go:436: (dbg) Non-zero exit: docker rmi kicbase/echo-server:functional-397992: exit status 1 (24.352377ms)

** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-397992

** /stderr **
functional_test.go:438: failed to remove image from docker: exit status 1

** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-397992

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.64s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-397992 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4db27b76-4527-43c2-86b0-d2afd06af2d2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0414 11:09:05.632742 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-397992 -n functional-397992
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-04-14 11:13:04.134806918 +0000 UTC m=+757.303194770
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-397992 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-397992 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-397992/192.168.49.2
Start Time:       Mon, 14 Apr 2025 11:09:03 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:  10.244.0.10
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zgb4r (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-zgb4r:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  4m                  default-scheduler  Successfully assigned default/nginx-svc to functional-397992
  Warning  Failed     81s (x2 over 3m5s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     81s (x2 over 3m5s)  kubelet            Error: ErrImagePull
  Normal   BackOff    69s (x2 over 3m5s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     69s (x2 over 3m5s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    54s (x3 over 4m)    kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-397992 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-397992 logs nginx-svc -n default: exit status 1 (67.067354ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-397992 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.64s)
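
The wait can be reproduced outside the harness; a rough equivalent of what the test polls for, with the same selector and timeout:

	kubectl --context functional-397992 wait pod -l run=nginx-svc \
	  --for=condition=Ready --timeout=4m0s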

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (113.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0414 11:13:04.268889 1763595 retry.go:31] will retry after 4.412114376s: Temporary Error: Get "http:": http: no Host in request URL
I0414 11:13:08.681340 1763595 retry.go:31] will retry after 6.643372177s: Temporary Error: Get "http:": http: no Host in request URL
E0414 11:13:11.397394 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
I0414 11:13:15.325619 1763595 retry.go:31] will retry after 4.970401814s: Temporary Error: Get "http:": http: no Host in request URL
I0414 11:13:20.296276 1763595 retry.go:31] will retry after 11.342041537s: Temporary Error: Get "http:": http: no Host in request URL
I0414 11:13:31.638726 1763595 retry.go:31] will retry after 19.894633907s: Temporary Error: Get "http:": http: no Host in request URL
I0414 11:13:51.533898 1763595 retry.go:31] will retry after 23.364915489s: Temporary Error: Get "http:": http: no Host in request URL
I0414 11:14:14.899882 1763595 retry.go:31] will retry after 42.516379789s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-397992 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
nginx-svc   LoadBalancer   10.96.30.131   10.96.30.131   80:32503/TCP   5m54s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (113.21s)
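
The empty URL in Get "http:" means the tunnel never published a reachable endpoint, even though the service did receive an external IP. With a working tunnel (and a running pod, which the rate-limited pull prevented here), the check reduces to roughly:

	curl -s http://10.96.30.131/ | grep "Welcome to nginx!"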


Test pass (291/330)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 5.02
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.32.2/json-events 4.45
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.07
18 TestDownloadOnly/v1.32.2/DeleteAll 0.23
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 1.17
21 TestBinaryMirror 0.8
22 TestOffline 56.17
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 123.52
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 9.49
35 TestAddons/parallel/Registry 15.53
37 TestAddons/parallel/InspektorGadget 10.86
38 TestAddons/parallel/MetricsServer 5.68
40 TestAddons/parallel/CSI 51.26
41 TestAddons/parallel/Headlamp 16.55
42 TestAddons/parallel/CloudSpanner 5.49
43 TestAddons/parallel/LocalPath 8.18
44 TestAddons/parallel/NvidiaDevicePlugin 6.47
45 TestAddons/parallel/Yakd 11.77
46 TestAddons/parallel/AmdGpuDevicePlugin 6.49
47 TestAddons/StoppedEnableDisable 12.17
48 TestCertOptions 29.94
49 TestCertExpiration 244.05
51 TestForceSystemdFlag 29.35
52 TestForceSystemdEnv 37.5
54 TestKVMDriverInstallOrUpdate 1.38
58 TestErrorSpam/setup 23.57
59 TestErrorSpam/start 0.62
60 TestErrorSpam/status 0.88
61 TestErrorSpam/pause 1.57
62 TestErrorSpam/unpause 1.62
63 TestErrorSpam/stop 1.39
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 39.64
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 33.92
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.14
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.86
75 TestFunctional/serial/CacheCmd/cache/add_local 0.99
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.75
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 36.95
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.44
86 TestFunctional/serial/LogsFileCmd 1.48
87 TestFunctional/serial/InvalidService 4
89 TestFunctional/parallel/ConfigCmd 0.46
90 TestFunctional/parallel/DashboardCmd 35.58
91 TestFunctional/parallel/DryRun 0.44
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 1.04
97 TestFunctional/parallel/ServiceCmdConnect 9.62
98 TestFunctional/parallel/AddonsCmd 0.26
101 TestFunctional/parallel/SSHCmd 0.56
102 TestFunctional/parallel/CpCmd 1.84
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.82
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
113 TestFunctional/parallel/License 0.18
114 TestFunctional/parallel/MountCmd/any-port 8.16
115 TestFunctional/parallel/ServiceCmd/DeployApp 9.17
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
117 TestFunctional/parallel/ProfileCmd/profile_list 0.5
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.56
119 TestFunctional/parallel/MountCmd/specific-port 1.91
120 TestFunctional/parallel/ServiceCmd/List 0.52
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
124 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
125 TestFunctional/parallel/MountCmd/VerifyCleanup 1.58
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
127 TestFunctional/parallel/ServiceCmd/Format 0.38
128 TestFunctional/parallel/ServiceCmd/URL 0.39
129 TestFunctional/parallel/Version/short 0.05
130 TestFunctional/parallel/Version/components 0.48
131 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
132 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
133 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
134 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
135 TestFunctional/parallel/ImageCommands/ImageBuild 2.67
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.62
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
146 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 103.76
162 TestMultiControlPlane/serial/DeployApp 5.59
163 TestMultiControlPlane/serial/PingHostFromPods 1.12
164 TestMultiControlPlane/serial/AddWorkerNode 35.54
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
167 TestMultiControlPlane/serial/CopyFile 16.29
168 TestMultiControlPlane/serial/StopSecondaryNode 12.58
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
170 TestMultiControlPlane/serial/RestartSecondaryNode 22.77
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.05
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 173.2
173 TestMultiControlPlane/serial/DeleteSecondaryNode 11.47
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
175 TestMultiControlPlane/serial/StopCluster 35.77
176 TestMultiControlPlane/serial/RestartCluster 100.33
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
178 TestMultiControlPlane/serial/AddSecondaryNode 37.93
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
183 TestJSONOutput/start/Command 40.68
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.69
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.62
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.83
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
208 TestKicCustomNetwork/create_custom_network 29.05
209 TestKicCustomNetwork/use_default_bridge_network 23.75
210 TestKicExistingNetwork 24.18
211 TestKicCustomSubnet 27.74
212 TestKicStaticIP 25.52
213 TestMainNoArgs 0.05
214 TestMinikubeProfile 48.97
217 TestMountStart/serial/StartWithMountFirst 8.09
218 TestMountStart/serial/VerifyMountFirst 0.25
219 TestMountStart/serial/StartWithMountSecond 5.26
220 TestMountStart/serial/VerifyMountSecond 0.26
221 TestMountStart/serial/DeleteFirst 1.64
222 TestMountStart/serial/VerifyMountPostDelete 0.25
223 TestMountStart/serial/Stop 1.18
224 TestMountStart/serial/RestartStopped 7.12
225 TestMountStart/serial/VerifyMountPostStop 0.25
228 TestMultiNode/serial/FreshStart2Nodes 69.1
229 TestMultiNode/serial/DeployApp2Nodes 6.57
230 TestMultiNode/serial/PingHostFrom2Pods 0.77
231 TestMultiNode/serial/AddNode 28.74
232 TestMultiNode/serial/MultiNodeLabels 0.07
233 TestMultiNode/serial/ProfileList 0.63
234 TestMultiNode/serial/CopyFile 9.27
235 TestMultiNode/serial/StopNode 2.13
236 TestMultiNode/serial/StartAfterStop 9.07
237 TestMultiNode/serial/RestartKeepsNodes 87.78
238 TestMultiNode/serial/DeleteNode 5.01
239 TestMultiNode/serial/StopMultiNode 23.81
240 TestMultiNode/serial/RestartMultiNode 45.66
241 TestMultiNode/serial/ValidateNameConflict 27.51
246 TestPreload 104.1
248 TestScheduledStopUnix 98.15
251 TestInsufficientStorage 13.09
252 TestRunningBinaryUpgrade 60.52
254 TestKubernetesUpgrade 353.63
255 TestMissingContainerUpgrade 142.27
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
258 TestNoKubernetes/serial/StartWithK8s 34.81
259 TestNoKubernetes/serial/StartWithStopK8s 19.86
260 TestNoKubernetes/serial/Start 5.48
268 TestNetworkPlugins/group/false 4.05
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
270 TestNoKubernetes/serial/ProfileList 1.82
271 TestNoKubernetes/serial/Stop 1.25
272 TestNoKubernetes/serial/StartNoArgs 8.81
276 TestStoppedBinaryUpgrade/Setup 0.36
277 TestStoppedBinaryUpgrade/Upgrade 102.43
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
288 TestPause/serial/Start 48.67
289 TestNetworkPlugins/group/auto/Start 42.76
290 TestPause/serial/SecondStartNoReconfiguration 18.37
291 TestNetworkPlugins/group/kindnet/Start 40.6
292 TestPause/serial/Pause 0.81
293 TestPause/serial/VerifyStatus 0.32
294 TestPause/serial/Unpause 0.75
295 TestPause/serial/PauseAgain 0.8
296 TestPause/serial/DeletePaused 2.78
297 TestPause/serial/VerifyDeletedResources 0.85
298 TestNetworkPlugins/group/calico/Start 56.71
299 TestNetworkPlugins/group/auto/KubeletFlags 0.29
300 TestNetworkPlugins/group/auto/NetCatPod 11.24
301 TestNetworkPlugins/group/auto/DNS 0.14
302 TestNetworkPlugins/group/auto/Localhost 0.12
303 TestNetworkPlugins/group/auto/HairPin 0.13
304 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
305 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
306 TestNetworkPlugins/group/kindnet/NetCatPod 10.28
307 TestNetworkPlugins/group/custom-flannel/Start 50.4
308 TestNetworkPlugins/group/kindnet/DNS 0.15
309 TestNetworkPlugins/group/kindnet/Localhost 0.13
310 TestNetworkPlugins/group/kindnet/HairPin 0.11
311 TestNetworkPlugins/group/calico/ControllerPod 6.01
312 TestNetworkPlugins/group/enable-default-cni/Start 38.05
313 TestNetworkPlugins/group/calico/KubeletFlags 0.36
314 TestNetworkPlugins/group/calico/NetCatPod 10.37
315 TestNetworkPlugins/group/calico/DNS 0.14
316 TestNetworkPlugins/group/calico/Localhost 0.13
317 TestNetworkPlugins/group/calico/HairPin 0.13
318 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
319 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.21
320 TestNetworkPlugins/group/flannel/Start 48.89
321 TestNetworkPlugins/group/custom-flannel/DNS 0.16
322 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
323 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
324 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
325 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.23
326 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
327 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
328 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
329 TestNetworkPlugins/group/bridge/Start 41.12
331 TestStartStop/group/old-k8s-version/serial/FirstStart 143.05
332 TestNetworkPlugins/group/flannel/ControllerPod 6.01
334 TestStartStop/group/no-preload/serial/FirstStart 55.77
335 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
336 TestNetworkPlugins/group/flannel/NetCatPod 12.22
337 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
338 TestNetworkPlugins/group/bridge/NetCatPod 10.27
339 TestNetworkPlugins/group/flannel/DNS 0.16
340 TestNetworkPlugins/group/flannel/Localhost 0.14
341 TestNetworkPlugins/group/flannel/HairPin 0.17
342 TestNetworkPlugins/group/bridge/DNS 0.15
343 TestNetworkPlugins/group/bridge/Localhost 0.13
344 TestNetworkPlugins/group/bridge/HairPin 0.13
346 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 43.21
348 TestStartStop/group/newest-cni/serial/FirstStart 28.06
349 TestStartStop/group/no-preload/serial/DeployApp 9.33
350 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
351 TestStartStop/group/no-preload/serial/Stop 12.29
352 TestStartStop/group/newest-cni/serial/DeployApp 0
353 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.22
354 TestStartStop/group/newest-cni/serial/Stop 1.25
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
356 TestStartStop/group/no-preload/serial/SecondStart 278.7
357 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
358 TestStartStop/group/newest-cni/serial/SecondStart 13.5
359 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.4
360 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
361 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
362 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
363 TestStartStop/group/newest-cni/serial/Pause 3.05
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
365 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.04
367 TestStartStop/group/embed-certs/serial/FirstStart 47.68
368 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
369 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 297.71
370 TestStartStop/group/old-k8s-version/serial/DeployApp 9.43
371 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.91
372 TestStartStop/group/old-k8s-version/serial/Stop 11.98
373 TestStartStop/group/embed-certs/serial/DeployApp 10.25
374 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
375 TestStartStop/group/old-k8s-version/serial/SecondStart 124.68
376 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
377 TestStartStop/group/embed-certs/serial/Stop 14.61
378 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
379 TestStartStop/group/embed-certs/serial/SecondStart 301.5
380 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
381 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
382 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
383 TestStartStop/group/old-k8s-version/serial/Pause 2.69
384 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
386 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
387 TestStartStop/group/no-preload/serial/Pause 2.74
388 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
389 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
390 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
391 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.67
392 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
393 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
394 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
395 TestStartStop/group/embed-certs/serial/Pause 2.67
TestDownloadOnly/v1.20.0/json-events (5.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-076834 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-076834 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.016299759s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.02s)
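Context for the invocation above: minikube's -o=json mode prints one JSON event per line on stdout. A minimal sketch of a consumer, assuming each line is a self-contained CloudEvents-style object (the profile name and the "type" field lookup are illustrative, not part of a documented contract):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Mirrors the invocation logged above; the profile name is illustrative.
	cmd := exec.Command("minikube", "start", "-o=json", "--download-only",
		"-p", "download-demo", "--kubernetes-version=v1.20.0",
		"--container-runtime=crio", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // individual events can be long
	for sc.Scan() {
		var ev map[string]any
		if json.Unmarshal(sc.Bytes(), &ev) == nil {
			fmt.Println("event:", ev["type"]) // assumed CloudEvents-style field
		}
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}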

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0414 11:00:31.891885 1763595 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0414 11:00:31.892007 1763595 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-1756784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
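The preload check above amounts to a stat of the cached tarball. A minimal sketch, assuming the cache layout shown in the log line (the "v18" preload-schema segment may differ across minikube releases):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadExists mirrors the check logged above: the preload is "found"
// when the cached tarball exists on disk. The filename layout is copied
// from the log line; the v18 segment is an assumption for this release.
func preloadExists(minikubeHome, k8sVersion, runtime string) bool {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4",
		k8sVersion, runtime)
	_, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
	return err == nil
}

func main() {
	fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.20.0", "cri-o"))
}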

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-076834
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-076834: exit status 85 (71.001469ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-076834 | jenkins | v1.35.0 | 14 Apr 25 11:00 UTC |          |
	|         | -p download-only-076834        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 11:00:26
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 11:00:26.921793 1763607 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:00:26.922083 1763607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:00:26.922095 1763607 out.go:358] Setting ErrFile to fd 2...
	I0414 11:00:26.922099 1763607 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:00:26.922289 1763607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
	W0414 11:00:26.922443 1763607 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20534-1756784/.minikube/config/config.json: open /home/jenkins/minikube-integration/20534-1756784/.minikube/config/config.json: no such file or directory
	I0414 11:00:26.923080 1763607 out.go:352] Setting JSON to true
	I0414 11:00:26.924048 1763607 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":157375,"bootTime":1744471052,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 11:00:26.924172 1763607 start.go:139] virtualization: kvm guest
	I0414 11:00:26.926951 1763607 out.go:97] [download-only-076834] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0414 11:00:26.927109 1763607 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20534-1756784/.minikube/cache/preloaded-tarball: no such file or directory
	I0414 11:00:26.927153 1763607 notify.go:220] Checking for updates...
	I0414 11:00:26.928873 1763607 out.go:169] MINIKUBE_LOCATION=20534
	I0414 11:00:26.930668 1763607 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 11:00:26.932690 1763607 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20534-1756784/kubeconfig
	I0414 11:00:26.934489 1763607 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-1756784/.minikube
	I0414 11:00:26.936252 1763607 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0414 11:00:26.939580 1763607 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0414 11:00:26.939918 1763607 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 11:00:26.963074 1763607 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0414 11:00:26.963165 1763607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 11:00:27.484708 1763607 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-14 11:00:27.473676096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0414 11:00:27.484845 1763607 docker.go:318] overlay module found
	I0414 11:00:27.486774 1763607 out.go:97] Using the docker driver based on user configuration
	I0414 11:00:27.486829 1763607 start.go:297] selected driver: docker
	I0414 11:00:27.486836 1763607 start.go:901] validating driver "docker" against <nil>
	I0414 11:00:27.486942 1763607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 11:00:27.544587 1763607 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-14 11:00:27.533978398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0414 11:00:27.544783 1763607 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 11:00:27.545355 1763607 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0414 11:00:27.545554 1763607 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 11:00:27.547559 1763607 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-076834 host does not exist
	  To start a cluster, run: "minikube start -p download-only-076834"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-076834
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.2/json-events (4.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-065128 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-065128 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.451718096s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (4.45s)

                                                
                                    
TestDownloadOnly/v1.32.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0414 11:00:36.789180 1763595 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0414 11:00:36.789222 1763595 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-1756784/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-065128
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-065128: exit status 85 (68.57365ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-076834 | jenkins | v1.35.0 | 14 Apr 25 11:00 UTC |                     |
	|         | -p download-only-076834        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 14 Apr 25 11:00 UTC | 14 Apr 25 11:00 UTC |
	| delete  | -p download-only-076834        | download-only-076834 | jenkins | v1.35.0 | 14 Apr 25 11:00 UTC | 14 Apr 25 11:00 UTC |
	| start   | -o=json --download-only        | download-only-065128 | jenkins | v1.35.0 | 14 Apr 25 11:00 UTC |                     |
	|         | -p download-only-065128        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/14 11:00:32
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0414 11:00:32.381723 1763944 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:00:32.381824 1763944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:00:32.381829 1763944 out.go:358] Setting ErrFile to fd 2...
	I0414 11:00:32.381833 1763944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:00:32.382036 1763944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
	I0414 11:00:32.382606 1763944 out.go:352] Setting JSON to true
	I0414 11:00:32.383444 1763944 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":157380,"bootTime":1744471052,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 11:00:32.383512 1763944 start.go:139] virtualization: kvm guest
	I0414 11:00:32.385576 1763944 out.go:97] [download-only-065128] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 11:00:32.385747 1763944 notify.go:220] Checking for updates...
	I0414 11:00:32.387393 1763944 out.go:169] MINIKUBE_LOCATION=20534
	I0414 11:00:32.389313 1763944 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 11:00:32.390946 1763944 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20534-1756784/kubeconfig
	I0414 11:00:32.392523 1763944 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-1756784/.minikube
	I0414 11:00:32.393927 1763944 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0414 11:00:32.396398 1763944 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0414 11:00:32.396734 1763944 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 11:00:32.419811 1763944 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0414 11:00:32.419898 1763944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 11:00:32.475592 1763944 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-04-14 11:00:32.465874784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0414 11:00:32.475696 1763944 docker.go:318] overlay module found
	I0414 11:00:32.477363 1763944 out.go:97] Using the docker driver based on user configuration
	I0414 11:00:32.477392 1763944 start.go:297] selected driver: docker
	I0414 11:00:32.477404 1763944 start.go:901] validating driver "docker" against <nil>
	I0414 11:00:32.477486 1763944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 11:00:32.528872 1763944 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-04-14 11:00:32.520012136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0414 11:00:32.529078 1763944 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0414 11:00:32.529593 1763944 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0414 11:00:32.529736 1763944 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0414 11:00:32.531664 1763944 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-065128 host does not exist
	  To start a cluster, run: "minikube start -p download-only-065128"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-065128
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (1.17s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-371317 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-371317" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-371317
--- PASS: TestDownloadOnlyKic (1.17s)

                                                
                                    
TestBinaryMirror (0.8s)

                                                
                                                
=== RUN   TestBinaryMirror
I0414 11:00:38.683994 1763595 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-354346 --alsologtostderr --binary-mirror http://127.0.0.1:43913 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-354346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-354346
--- PASS: TestBinaryMirror (0.80s)
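The --binary-mirror flag above points minikube's binary downloads at an alternate base URL, here a throwaway server on 127.0.0.1:43913. As a rough sketch of what such a mirror needs to be, assuming only that binaries are fetched by path under the base URL (the directory layout is inferred from the dl.k8s.io URL in the log, not confirmed by it):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a local directory laid out like the upstream release tree,
	// e.g. ./mirror/v1.32.2/bin/linux/amd64/kubectl. The layout is an
	// assumption inferred from the dl.k8s.io URL logged above.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:43913", nil))
}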

                                                
                                    
TestOffline (56.17s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-109910 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-109910 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (53.773885289s)
helpers_test.go:175: Cleaning up "offline-crio-109910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-109910
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-109910: (2.391823046s)
--- PASS: TestOffline (56.17s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-295301
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-295301: exit status 85 (61.028615ms)

                                                
                                                
-- stdout --
	* Profile "addons-295301" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-295301"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-295301
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-295301: exit status 85 (59.436809ms)

                                                
                                                
-- stdout --
	* Profile "addons-295301" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-295301"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (123.52s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-295301 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-295301 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m3.518588857s)
--- PASS: TestAddons/Setup (123.52s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-295301 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-295301 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-295301 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-295301 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2899fbe4-d17b-49ae-98ac-2fcd7fca1ec1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2899fbe4-d17b-49ae-98ac-2fcd7fca1ec1] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003491924s
addons_test.go:633: (dbg) Run:  kubectl --context addons-295301 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-295301 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-295301 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.49s)
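The repeated 'waiting ... for pods matching "integration-test=busybox"' lines come from a shared poll helper in helpers_test.go. A rough reconstruction of that pattern, shelling out to kubectl rather than using client-go (selector, context, timeout, and poll interval here are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls pod phases for a label selector until every
// matching pod reports Running, or the timeout elapses.
func waitForRunning(kctx, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kctx, "get", "pods",
			"-n", ns, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			running := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					running = false
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
}

func main() {
	fmt.Println(waitForRunning("addons-295301", "default", "integration-test=busybox", 8*time.Minute))
}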

                                                
                                    
TestAddons/parallel/Registry (15.53s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.119157ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-hcqpt" [0dc9a30a-c9b5-4470-a64f-9fa51f58652d] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003008402s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-k9cnv" [87c81d3e-261b-419a-9046-7bf3e82c1778] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004457606s
addons_test.go:331: (dbg) Run:  kubectl --context addons-295301 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-295301 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-295301 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.737381449s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 ip
2025/04/14 11:03:16 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.53s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.86s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-sz5v7" [de89b99c-af4b-4f0e-9a82-a6ec474fd6ff] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.022018138s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-295301 addons disable inspektor-gadget --alsologtostderr -v=1: (5.836437925s)
--- PASS: TestAddons/parallel/InspektorGadget (10.86s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.68s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.233077ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-z4kvj" [b4c6191a-f9ed-4e2e-9c86-aabd642b2563] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003400866s
addons_test.go:402: (dbg) Run:  kubectl --context addons-295301 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.68s)

                                                
                                    
TestAddons/parallel/CSI (51.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0414 11:03:16.835120 1763595 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0414 11:03:16.838635 1763595 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0414 11:03:16.838674 1763595 kapi.go:107] duration metric: took 3.565884ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 3.578984ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-295301 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-295301 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [af1bc38d-f485-4c03-9f38-f17979f46c04] Pending
helpers_test.go:344: "task-pv-pod" [af1bc38d-f485-4c03-9f38-f17979f46c04] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [af1bc38d-f485-4c03-9f38-f17979f46c04] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004100514s
addons_test.go:511: (dbg) Run:  kubectl --context addons-295301 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-295301 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-295301 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-295301 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-295301 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-295301 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  (the same "get pvc hpvc-restore" poll, repeated 4 more times)
addons_test.go:543: (dbg) Run:  kubectl --context addons-295301 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8fff5807-007c-4e12-8fe5-c8ea010a2d3d] Pending
helpers_test.go:344: "task-pv-pod-restore" [8fff5807-007c-4e12-8fe5-c8ea010a2d3d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8fff5807-007c-4e12-8fe5-c8ea010a2d3d] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004074259s
addons_test.go:553: (dbg) Run:  kubectl --context addons-295301 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-295301 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-295301 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-295301 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.631945953s)
--- PASS: TestAddons/parallel/CSI (51.26s)
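
Note: the bind-wait above (helpers_test.go:394) is nothing more than a poll of the claim's .status.phase. A minimal standalone sketch of the same loop, assuming kubectl and the addons-295301 context from this run are reachable:

// pvcwait.go - poll a PVC until .status.phase reads "Bound" or a deadline
// passes, mirroring the helpers_test.go:394 loop above (a sketch, not the
// test's actual code).
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "addons-295301",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && string(out) == "Bound" {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		// The poll interval is a guess; the helper's own interval is not shown in the log.
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc hpvc")
}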

TestAddons/parallel/Headlamp (16.55s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-295301 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-nbtn6" [c1461fc9-e143-4a1d-b792-093b6329f3d1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-nbtn6" [c1461fc9-e143-4a1d-b792-093b6329f3d1] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003790725s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-295301 addons disable headlamp --alsologtostderr -v=1: (5.712888062s)
--- PASS: TestAddons/parallel/Headlamp (16.55s)
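
Note: the Headlamp check is purely label-selector driven. The same wait can be expressed with plain kubectl; a sketch only (the test uses its own poller, not `kubectl wait`):

// headlampwait.go - express the pod-readiness wait above with `kubectl wait`
// instead of the test's internal poller (an equivalent sketch, not test code).
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "addons-295301", "-n", "headlamp",
		"wait", "--for=condition=Ready", "pod",
		"-l", "app.kubernetes.io/name=headlamp", "--timeout=8m0s")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}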

TestAddons/parallel/CloudSpanner (5.49s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7dc7f9b5b8-jmfbv" [edf8fb8a-6f26-40f3-8574-454a932dcec3] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003422192s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.49s)

TestAddons/parallel/LocalPath (8.18s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-295301 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-295301 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-295301 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  (the same "get pvc test-pvc" poll, repeated 4 more times)
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [717e424d-625f-4703-84c2-d758a795e5b3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [717e424d-625f-4703-84c2-d758a795e5b3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [717e424d-625f-4703-84c2-d758a795e5b3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004204082s
addons_test.go:906: (dbg) Run:  kubectl --context addons-295301 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 ssh "cat /opt/local-path-provisioner/pvc-b3294b40-cb13-4826-81aa-9d006b235b14_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-295301 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-295301 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.18s)
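
Note: addons_test.go:915 verifies the provisioner end to end by reading the written file back from the node's host path. A sketch of that read, using the pvc-<uid>_<namespace>_<claim> directory from this particular run:

// localpathread.go - read back data written through the local-path PVC by
// ssh'ing into the node, as addons_test.go:915 does above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "addons-295301", "ssh",
		"cat /opt/local-path-provisioner/pvc-b3294b40-cb13-4826-81aa-9d006b235b14_default_test-pvc/file1").CombinedOutput()
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("file1: %s\n", out)
}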

TestAddons/parallel/NvidiaDevicePlugin (6.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gmc4h" [6757ac50-f6ee-42cd-bea8-9399727ed2d9] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004545035s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.47s)

TestAddons/parallel/Yakd (11.77s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-29f8v" [b12795c6-bfe7-4893-a5f2-65d389a247ff] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003840641s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-295301 addons disable yakd --alsologtostderr -v=1: (5.766346321s)
--- PASS: TestAddons/parallel/Yakd (11.77s)

TestAddons/parallel/AmdGpuDevicePlugin (6.49s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-nm4lc" [c90097ff-1058-422f-9d9c-bcc41a887e7c] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.004090531s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.49s)

TestAddons/StoppedEnableDisable (12.17s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-295301
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-295301: (11.903787683s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-295301
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-295301
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-295301
--- PASS: TestAddons/StoppedEnableDisable (12.17s)

TestCertOptions (29.94s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-241293 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-241293 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (27.306209756s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-241293 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-241293 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-241293 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-241293" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-241293
E0414 11:42:43.692746 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-241293: (1.992870678s)
--- PASS: TestCertOptions (29.94s)
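
Note: instead of grepping `openssl x509 -text` output as cert_options_test.go:60 does above, the same SAN assertions can be made with Go's crypto/x509. A sketch, assuming the profile from this run were still up:

// sancheck.go - confirm the apiserver certificate carries the SANs passed via
// --apiserver-ips/--apiserver-names, paralleling cert_options_test.go:60.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os/exec"
)

func main() {
	// The certificate lives inside the node, so fetch it over `minikube ssh`.
	pemBytes, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-241293",
		"ssh", "sudo cat /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost and www.google.com among them
	fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1 and 192.168.15.15 among them
}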

TestCertExpiration (244.05s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-163378 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-163378 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (34.400777516s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-163378 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-163378 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (25.932128854s)
helpers_test.go:175: Cleaning up "cert-expiration-163378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-163378
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-163378: (3.71666656s)
--- PASS: TestCertExpiration (244.05s)

TestForceSystemdFlag (29.35s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-341031 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-341031 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.536639181s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-341031 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-341031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-341031
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-341031: (2.50823705s)
--- PASS: TestForceSystemdFlag (29.35s)
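
Note: docker_test.go:132 inspects the CRI-O drop-in written when --force-systemd is set. A sketch of the same check; the `cgroup_manager` key is CRI-O's standard setting and is assumed here rather than quoted from the test source:

// systemdcheck.go - verify --force-systemd switched CRI-O to the systemd
// cgroup manager by reading the drop-in the test cats above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-341031",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not found in the drop-in")
	}
}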

TestForceSystemdEnv (37.5s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-140508 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-140508 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.930888951s)
helpers_test.go:175: Cleaning up "force-systemd-env-140508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-140508
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-140508: (2.568919246s)
--- PASS: TestForceSystemdEnv (37.50s)

TestKVMDriverInstallOrUpdate (1.38s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0414 11:42:45.345826 1763595 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 11:42:45.346006 1763595 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0414 11:42:45.383942 1763595 install.go:62] docker-machine-driver-kvm2: exit status 1
W0414 11:42:45.384082 1763595 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0414 11:42:45.384160 1763595 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate959252135/001/docker-machine-driver-kvm2
I0414 11:42:45.542114 1763595 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate959252135/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0005ef6f8 gz:0xc0005ef780 tar:0xc0005ef730 tar.bz2:0xc0005ef740 tar.gz:0xc0005ef750 tar.xz:0xc0005ef760 tar.zst:0xc0005ef770 tbz2:0xc0005ef740 tgz:0xc0005ef750 txz:0xc0005ef760 tzst:0xc0005ef770 xz:0xc0005ef788 zip:0xc0005ef790 zst:0xc0005ef7a0] Getters:map[file:0xc00182dee0 http:0xc000d13040 https:0xc000d13090] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0414 11:42:45.542160 1763595 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate959252135/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.38s)
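
Note: the pair of download.go:108 lines above shows the install strategy: fetch the arch-suffixed release asset first and, when its checksum file 404s, fall back to the unsuffixed common asset. A compact sketch of that fallback; fetch() is a hypothetical stand-in for the real checksummed go-getter download:

// driverfallback.go - arch-specific -> common-name download fallback, as seen
// in the log above. URLs are the v1.3.0 release assets the test exercises.
package main

import (
	"fmt"
	"net/http"
)

const base = "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"

// fetch only verifies that the asset and its .sha256 file both exist; it
// stands in for the verified download the real code performs.
func fetch(url string) error {
	for _, u := range []string{url, url + ".sha256"} {
		resp, err := http.Head(u)
		if err != nil {
			return err
		}
		resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("bad response code: %d for %s", resp.StatusCode, u)
		}
	}
	return nil
}

func main() {
	if err := fetch(base + "-amd64"); err != nil {
		fmt.Println("arch specific driver failed:", err, "- trying the common version")
		if err := fetch(base); err != nil {
			panic(err)
		}
	}
	fmt.Println("driver download resolved")
}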

TestErrorSpam/setup (23.57s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-138782 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-138782 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-138782 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-138782 --driver=docker  --container-runtime=crio: (23.574111471s)
--- PASS: TestErrorSpam/setup (23.57s)

TestErrorSpam/start (0.62s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138782 --log_dir /tmp/nospam-138782 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138782 --log_dir /tmp/nospam-138782 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138782 --log_dir /tmp/nospam-138782 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

TestErrorSpam/status (0.88s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138782 --log_dir /tmp/nospam-138782 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138782 --log_dir /tmp/nospam-138782 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138782 --log_dir /tmp/nospam-138782 status
--- PASS: TestErrorSpam/status (0.88s)

TestErrorSpam/pause (1.57s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138782 --log_dir /tmp/nospam-138782 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138782 --log_dir /tmp/nospam-138782 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138782 --log_dir /tmp/nospam-138782 pause
--- PASS: TestErrorSpam/pause (1.57s)

TestErrorSpam/unpause (1.62s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138782 --log_dir /tmp/nospam-138782 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138782 --log_dir /tmp/nospam-138782 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138782 --log_dir /tmp/nospam-138782 unpause
--- PASS: TestErrorSpam/unpause (1.62s)

TestErrorSpam/stop (1.39s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138782 --log_dir /tmp/nospam-138782 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-138782 --log_dir /tmp/nospam-138782 stop: (1.1938082s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138782 --log_dir /tmp/nospam-138782 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-138782 --log_dir /tmp/nospam-138782 stop
--- PASS: TestErrorSpam/stop (1.39s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20534-1756784/.minikube/files/etc/test/nested/copy/1763595/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
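
Note: this test exercises minikube's file sync: anything staged under $MINIKUBE_HOME/.minikube/files/<path> is copied to /<path> inside the node at start. A sketch that reads the synced file from this run back out of the node:

// synccheck.go - confirm the staged file landed inside the node at the same
// relative path it had under .minikube/files (path taken from the log above).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-397992",
		"ssh", "cat /etc/test/nested/copy/1763595/hosts").CombinedOutput()
	if err != nil {
		fmt.Println("synced file missing:", err)
		return
	}
	fmt.Printf("synced hosts file:\n%s", out)
}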

TestFunctional/serial/StartWithProxy (39.64s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397992 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-397992 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (39.63996959s)
--- PASS: TestFunctional/serial/StartWithProxy (39.64s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.92s)

=== RUN   TestFunctional/serial/SoftStart
I0414 11:07:23.989361 1763595 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397992 --alsologtostderr -v=8
E0414 11:07:43.691864 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:43.698368 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:43.709844 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:43.731410 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:43.772937 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:43.855156 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:44.016676 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:44.338744 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:44.981013 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:46.262633 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:48.824374 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:07:53.946149 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-397992 --alsologtostderr -v=8: (33.918982488s)
functional_test.go:680: soft start took 33.919783241s for "functional-397992" cluster.
I0414 11:07:57.908750 1763595 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (33.92s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.14s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-397992 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.14s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.86s)

TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-397992 /tmp/TestFunctionalserialCacheCmdcacheadd_local1064412656/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 cache add minikube-local-cache-test:functional-397992
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 cache delete minikube-local-cache-test:functional-397992
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-397992
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.99s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397992 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (276.988288ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)
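
Note: the sequence above is delete-verify-restore: `crictl rmi` removes the image inside the node, `crictl inspecti` then exits non-zero, and `cache reload` pushes the cached image back. The same steps as a standalone sketch, with run() as an invented helper:

// cachereload.go - replay the cache_reload sequence: remove the image in the
// node, confirm it is gone, then restore it from minikube's on-host cache.
package main

import (
	"fmt"
	"os/exec"
)

// run executes the minikube binary used by this job and reports any error.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	run("-p", "functional-397992", "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	if run("-p", "functional-397992", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
		fmt.Println("expected the image to be gone")
		return
	}
	run("-p", "functional-397992", "cache", "reload")
	if err := run("-p", "functional-397992", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("cache reload did not restore the image:", err)
	}
}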

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
E0414 11:08:04.188542 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 kubectl -- --context functional-397992 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-397992 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (36.95s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397992 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0414 11:08:24.670972 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-397992 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.954134538s)
functional_test.go:778: restart took 36.954271914s for "functional-397992" cluster.
I0414 11:08:41.388645 1763595 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (36.95s)
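
Note: --extra-config takes component.key=value pairs that minikube forwards to the named component's command line; the run above hands enable-admission-plugins to kube-apiserver. A usage sketch of the same invocation:

// extraconfig.go - restart the cluster with a kube-apiserver flag injected via
// --extra-config (the invocation the test runs above, wrapped in Go).
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-397992",
		"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision",
		"--wait=all")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}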

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-397992 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
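
Note: the phase/status pairs above come from decoding `kubectl get po -o json` for the control-plane pods. A sketch of that check, declaring only the JSON fields it needs:

// healthcheck.go - report phase and Ready condition for each control-plane
// pod, approximating what functional_test.go:827-852 verifies above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-397992",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "False"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}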

TestFunctional/serial/LogsCmd (1.44s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-397992 logs: (1.437248236s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

TestFunctional/serial/LogsFileCmd (1.48s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 logs --file /tmp/TestFunctionalserialLogsFileCmd1926530022/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-397992 logs --file /tmp/TestFunctionalserialLogsFileCmd1926530022/001/logs.txt: (1.476414625s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

TestFunctional/serial/InvalidService (4s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-397992 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-397992
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-397992: exit status 115 (346.070962ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31107 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-397992 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.00s)
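
Note: exit status 115 / SVC_UNREACHABLE above means the Service selects no running pod. Checking the Service's endpoints first makes that failure mode visible; a sketch:

// endpointcheck.go - show why `minikube service invalid-svc` fails: the
// Service has no ready endpoints behind it (a diagnostic sketch).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-397992",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil || strings.TrimSpace(string(out)) == "" {
		fmt.Println("invalid-svc has no ready endpoints; `minikube service` will fail")
		return
	}
	fmt.Println("ready endpoint IPs:", string(out))
}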

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397992 config get cpus: exit status 14 (109.522919ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397992 config get cpus: exit status 14 (59.391731ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
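
Note: `config get` on an unset key exits 14, which is what both Non-zero exits above assert. A sketch of handling that exit code from Go:

// configget.go - distinguish "key unset" (exit 14, per the log above) from
// other failures when reading a minikube config value.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-397992",
		"config", "get", "cpus").Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Println("cpus is not set")
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("cpus = %s", out)
}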

TestFunctional/parallel/DashboardCmd (35.58s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-397992 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-397992 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1803795: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (35.58s)

TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397992 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-397992 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (172.81669ms)

-- stdout --
	* [functional-397992] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20534
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20534-1756784/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-1756784/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0414 11:09:00.676257 1802052 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:09:00.678005 1802052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:09:00.678297 1802052 out.go:358] Setting ErrFile to fd 2...
	I0414 11:09:00.678319 1802052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:09:00.678638 1802052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
	I0414 11:09:00.679376 1802052 out.go:352] Setting JSON to false
	I0414 11:09:00.680662 1802052 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":157889,"bootTime":1744471052,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 11:09:00.680792 1802052 start.go:139] virtualization: kvm guest
	I0414 11:09:00.683034 1802052 out.go:177] * [functional-397992] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 11:09:00.685005 1802052 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 11:09:00.685016 1802052 notify.go:220] Checking for updates...
	I0414 11:09:00.687548 1802052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 11:09:00.688907 1802052 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-1756784/kubeconfig
	I0414 11:09:00.690118 1802052 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-1756784/.minikube
	I0414 11:09:00.691662 1802052 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 11:09:00.693138 1802052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 11:09:00.695191 1802052 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:09:00.695843 1802052 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 11:09:00.721433 1802052 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0414 11:09:00.721579 1802052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 11:09:00.776479 1802052 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:56 SystemTime:2025-04-14 11:09:00.765637314 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0414 11:09:00.776585 1802052 docker.go:318] overlay module found
	I0414 11:09:00.778571 1802052 out.go:177] * Using the docker driver based on existing profile
	I0414 11:09:00.779895 1802052 start.go:297] selected driver: docker
	I0414 11:09:00.779921 1802052 start.go:901] validating driver "docker" against &{Name:functional-397992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-397992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:09:00.780038 1802052 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 11:09:00.782746 1802052 out.go:201] 
	W0414 11:09:00.784295 1802052 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0414 11:09:00.785583 1802052 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397992 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)
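
Note: the dry run fails fast because the requested memory (250MB) is below the usable minimum the error message reports (1800MB). A sketch of that validation; the threshold is taken from the message above, not from minikube's source:

// memcheck.go - reproduce the RSRC_INSUFFICIENT_REQ_MEMORY check seen above:
// reject memory requests below the usable minimum.
package main

import "fmt"

const minUsableMB = 1800 // from the error message above

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // fails, like --memory 250MB above
	fmt.Println(validateMemory(4000)) // ok
}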

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397992 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-397992 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (184.567929ms)

-- stdout --
	* [functional-397992] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20534
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20534-1756784/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-1756784/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0414 11:09:00.168043 1801649 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:09:00.168157 1801649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:09:00.168165 1801649 out.go:358] Setting ErrFile to fd 2...
	I0414 11:09:00.168169 1801649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:09:00.168584 1801649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
	I0414 11:09:00.169370 1801649 out.go:352] Setting JSON to false
	I0414 11:09:00.170583 1801649 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":157888,"bootTime":1744471052,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 11:09:00.170705 1801649 start.go:139] virtualization: kvm guest
	I0414 11:09:00.173189 1801649 out.go:177] * [functional-397992] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0414 11:09:00.175655 1801649 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 11:09:00.175710 1801649 notify.go:220] Checking for updates...
	I0414 11:09:00.179768 1801649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 11:09:00.181913 1801649 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-1756784/kubeconfig
	I0414 11:09:00.184617 1801649 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-1756784/.minikube
	I0414 11:09:00.186594 1801649 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 11:09:00.188366 1801649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 11:09:00.190885 1801649 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:09:00.191679 1801649 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 11:09:00.220582 1801649 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0414 11:09:00.220678 1801649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 11:09:00.278948 1801649 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-14 11:09:00.268532661 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0414 11:09:00.279105 1801649 docker.go:318] overlay module found
	I0414 11:09:00.281667 1801649 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0414 11:09:00.283101 1801649 start.go:297] selected driver: docker
	I0414 11:09:00.283122 1801649 start.go:901] validating driver "docker" against &{Name:functional-397992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-397992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0414 11:09:00.283234 1801649 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 11:09:00.285897 1801649 out.go:201] 
	W0414 11:09:00.287322 1801649 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0414 11:09:00.288511 1801649 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
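
The French stdout and the localized RSRC_INSUFFICIENT_REQ_MEMORY message above are the substance of this test: minikube renders its messages through translation files selected from the process locale. A minimal sketch of that check, assuming LC_ALL is the variable minikube consults (consistent with the French "Utilisation du pilote docker..." lines in the log):

// localecheck.go - a sketch of the localization check this test performs:
// run minikube under a French locale and assert the output is translated.
// Treating LC_ALL as the selector is an assumption drawn from the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-397992",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8") // request French output
	out, _ := cmd.CombinedOutput()                       // a non-zero exit (23 above) is expected
	if strings.Contains(string(out), "Utilisation du pilote docker") {
		fmt.Println("output is localized to French")
	} else {
		fmt.Println("output was not translated")
	}
}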

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)
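
The second status invocation above passes a Go text/template via -f; each placeholder is resolved against minikube's status output (the "kublet" label is verbatim from the test's format string, typo and all). A sketch of how such a template renders, using a stand-in Status struct that only needs the four referenced fields:

// statusfmt.go - a sketch of rendering a --format template like the one in
// the log above. The Status struct is a stand-in; only the field names the
// template references (Host, Kubelet, APIServer, Kubeconfig) matter here.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	// Values mirror a healthy cluster, as the passing test above reports.
	_ = tmpl.Execute(os.Stdout, Status{
		Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
	})
}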

TestFunctional/parallel/ServiceCmdConnect (9.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-397992 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-397992 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-b8vph" [ca8cb83c-927a-4638-9563-4f9b44256480] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-b8vph" [ca8cb83c-927a-4638-9563-4f9b44256480] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004537277s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:32071
functional_test.go:1692: http://192.168.49.2:32071: success! body:

Hostname: hello-node-connect-58f9cf68d8-b8vph

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32071
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.62s)
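
The sequence above is: create a deployment, expose it as a NodePort service, wait for the pod, ask minikube for the service URL, then fetch it. A sketch of the final probe against the URL reported in this run (http://192.168.49.2:32071), checking for the echoserver's Hostname line:

// probe.go - a sketch of the endpoint check: fetch the NodePort URL that
// "minikube service hello-node-connect --url" printed and confirm the
// echoserver answered. The URL is taken from the log above.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://192.168.49.2:32071")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The echoserver reflects the request; the Hostname line names the pod.
	if strings.Contains(string(body), "Hostname: hello-node-connect") {
		fmt.Println("success! body:\n" + string(body))
	}
}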

TestFunctional/parallel/AddonsCmd (0.26s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

TestFunctional/parallel/SSHCmd (0.56s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

TestFunctional/parallel/CpCmd (1.84s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh -n functional-397992 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 cp functional-397992:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd954204075/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh -n functional-397992 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh -n functional-397992 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.84s)
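
Each minikube cp above is verified by cat-ing the file back over SSH, and the /tmp/does/not/exist case shows that cp creates missing parent directories. A sketch of that copy-and-compare round trip, using the same profile and paths as the log:

// cpcheck.go - a sketch of the copy-and-verify round trip above: cp a file
// into the node, read it back over ssh, and compare with the local bytes.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func run(args ...string) []byte {
	out, err := exec.Command("out/minikube-linux-amd64", args...).Output()
	if err != nil {
		log.Fatalf("%v: %v", args, err)
	}
	return out
}

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	run("-p", "functional-397992", "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	got := run("-p", "functional-397992", "ssh", "-n", "functional-397992", "sudo cat /home/docker/cp-test.txt")
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("copied file does not match source")
	}
}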

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/1763595/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "sudo cat /etc/test/nested/copy/1763595/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.82s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/1763595.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "sudo cat /etc/ssl/certs/1763595.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/1763595.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "sudo cat /usr/share/ca-certificates/1763595.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/17635952.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "sudo cat /etc/ssl/certs/17635952.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/17635952.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "sudo cat /usr/share/ca-certificates/17635952.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.82s)
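
The same certificate is checked in three forms above: the uploaded /etc/ssl/certs/<pid>.pem, its /usr/share/ca-certificates copy, and a hash-named alias (51391683.0 matches OpenSSL's subject-hash naming used for CA lookup). A sketch that parses one such PEM to confirm it is a well-formed certificate; reading it from a local path instead of over ssh is an assumption made for brevity:

// certcheck.go - a sketch of validating one synced certificate: decode the
// PEM and parse the X.509 structure. The path is illustrative; the test
// reads the same bytes out of the VM over ssh.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/ssl/certs/1763595.pem") // hypothetical local copy
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("subject:", cert.Subject, "expires:", cert.NotAfter)
}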

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-397992 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397992 ssh "sudo systemctl is-active docker": exit status 1 (304.593537ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397992 ssh "sudo systemctl is-active containerd": exit status 1 (290.644716ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
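
Both probes above pass by failing: systemctl is-active exits 0 only for an active unit and prints "inactive" with a non-zero status otherwise (status 3 in this log), so the assertion is that the ssh command fails and stdout reads "inactive". A sketch of that inverted check:

// runtimecheck.go - a sketch of the inverted assertion above: the docker and
// containerd units must NOT be active when crio is the selected runtime.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-397992",
			"ssh", "sudo systemctl is-active "+unit)
		out, err := cmd.Output() // err is expected: is-active exits non-zero for inactive units
		state := strings.TrimSpace(string(out))
		if err == nil || state != "inactive" {
			log.Fatalf("%s should be inactive, got %q (err=%v)", unit, state, err)
		}
		fmt.Printf("%s: %s (as expected)\n", unit, state)
	}
}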

TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

TestFunctional/parallel/MountCmd/any-port (8.16s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-397992 /tmp/TestFunctionalparallelMountCmdany-port2086900732/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744628928379628320" to /tmp/TestFunctionalparallelMountCmdany-port2086900732/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744628928379628320" to /tmp/TestFunctionalparallelMountCmdany-port2086900732/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744628928379628320" to /tmp/TestFunctionalparallelMountCmdany-port2086900732/001/test-1744628928379628320
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397992 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (361.300568ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0414 11:08:48.741346 1763595 retry.go:31] will retry after 557.695998ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 14 11:08 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 14 11:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 14 11:08 test-1744628928379628320
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh cat /mount-9p/test-1744628928379628320
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-397992 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2a80e838-5b6d-4431-b93d-90a10f1f860a] Pending
helpers_test.go:344: "busybox-mount" [2a80e838-5b6d-4431-b93d-90a10f1f860a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2a80e838-5b6d-4431-b93d-90a10f1f860a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2a80e838-5b6d-4431-b93d-90a10f1f860a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003780682s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-397992 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397992 /tmp/TestFunctionalparallelMountCmdany-port2086900732/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.16s)
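
The mount runs as a background daemon, so the first findmnt probe can race it; the harness retries after a short backoff (the retry.go:31 line above). A sketch of that poll loop; the attempt cap and fixed sleep are assumptions, since the harness computes its own backoff:

// mountwait.go - a sketch of polling for the 9p mount to appear in the
// guest, mirroring the retry the harness logs above.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-397992",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			log.Println("9p mount is visible in the guest")
			return
		}
		log.Printf("attempt %d: mount not ready (%v), retrying", attempt, err)
		time.Sleep(500 * time.Millisecond) // comparable to the ~558ms backoff logged above
	}
	log.Fatal("mount never appeared")
}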

TestFunctional/parallel/ServiceCmd/DeployApp (9.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-397992 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-397992 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-bhzs7" [36e19f08-65f2-4039-b3fb-79db99729f08] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-bhzs7" [36e19f08-65f2-4039-b3fb-79db99729f08] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003662796s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.17s)
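
The deploy and expose steps above are plain kubectl against the minikube context, after which the harness polls pods by label until one is Running. The same wait can be expressed with kubectl wait, sketched here via os/exec:

// deploywait.go - a sketch of the deploy-expose-wait sequence above, using
// "kubectl wait" in place of the harness's label-polling loop.
package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-397992"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kubectl("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8")
	kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
	// Block until the pod behind the label is Ready, like the ~9s wait above.
	kubectl("wait", "--for=condition=ready", "pod", "-l", "app=hello-node", "--timeout=10m")
}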

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "426.04636ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "71.271771ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "498.264073ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "59.058651ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)
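
profile list -o json is the machine-readable variant timed above. A sketch of consuming it; the valid/invalid grouping and the field names in the struct are assumptions about the schema, not taken from this log:

// profiles.go - a sketch of consuming "minikube profile list -o json".
// Treat the struct's field set as an assumption, not a spec.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type profile struct {
	Name string `json:"Name"`
}

type profileList struct {
	Valid   []profile `json:"valid"`
	Invalid []profile `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var list profileList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatal(err)
	}
	for _, p := range list.Valid {
		fmt.Println("valid profile:", p.Name)
	}
}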

TestFunctional/parallel/MountCmd/specific-port (1.91s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-397992 /tmp/TestFunctionalparallelMountCmdspecific-port4043136006/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397992 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (263.794066ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0414 11:08:56.801168 1763595 retry.go:31] will retry after 629.246553ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397992 /tmp/TestFunctionalparallelMountCmdspecific-port4043136006/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397992 ssh "sudo umount -f /mount-9p": exit status 1 (282.041414ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-397992 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397992 /tmp/TestFunctionalparallelMountCmdspecific-port4043136006/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)
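
The exit-32 "not mounted" result above is the expected shape of cleanup here: stopping the mount daemon already detached /mount-9p, so the forced unmount has nothing left to do. A sketch of a cleanup step that tolerates exactly that case:

// cleanup.go - a sketch of a tolerant unmount: the "not mounted" case
// (umount exiting with status 32, as logged above) is treated as success.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-397992",
		"ssh", "sudo umount -f /mount-9p")
	out, err := cmd.CombinedOutput()
	if err != nil && !strings.Contains(string(out), "not mounted") {
		log.Fatalf("unmount failed: %v\n%s", err, out)
	}
	log.Println("mount point is clean")
}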

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 service list -o json
functional_test.go:1511: Took "560.700958ms" to run "out/minikube-linux-amd64 -p functional-397992 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-397992 /tmp/TestFunctionalparallelMountCmdVerifyCleanup250675319/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-397992 /tmp/TestFunctionalparallelMountCmdVerifyCleanup250675319/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-397992 /tmp/TestFunctionalparallelMountCmdVerifyCleanup250675319/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397992 ssh "findmnt -T" /mount1: exit status 1 (365.16763ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0414 11:08:58.813103 1763595 retry.go:31] will retry after 312.488825ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-397992 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397992 /tmp/TestFunctionalparallelMountCmdVerifyCleanup250675319/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397992 /tmp/TestFunctionalparallelMountCmdVerifyCleanup250675319/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397992 /tmp/TestFunctionalparallelMountCmdVerifyCleanup250675319/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:30852
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:30852
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.48s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-397992 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-397992
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250214-acbabc1a
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-397992 image ls --format short --alsologtostderr:
I0414 11:09:38.183967 1804952 out.go:345] Setting OutFile to fd 1 ...
I0414 11:09:38.184455 1804952 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:09:38.184553 1804952 out.go:358] Setting ErrFile to fd 2...
I0414 11:09:38.184572 1804952 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:09:38.185048 1804952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
I0414 11:09:38.186111 1804952 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:09:38.186220 1804952 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:09:38.186589 1804952 cli_runner.go:164] Run: docker container inspect functional-397992 --format={{.State.Status}}
I0414 11:09:38.205579 1804952 ssh_runner.go:195] Run: systemctl --version
I0414 11:09:38.205642 1804952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-397992
I0414 11:09:38.223666 1804952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/functional-397992/id_rsa Username:docker}
I0414 11:09:38.309134 1804952 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
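
The stderr above exposes the plumbing behind every image ls call: inspect the published host port for 22/tcp on the node container, open an ssh client to 127.0.0.1 on that port, then run sudo crictl images --output json inside. A sketch of the port-lookup step, using the same docker inspect template the log shows:

// sshport.go - a sketch of resolving the host port mapped to the node's
// sshd, with the same inspect template as the cli_runner line above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"functional-397992").Output()
	if err != nil {
		log.Fatal(err)
	}
	port := strings.TrimSpace(string(out))
	fmt.Println("ssh endpoint: 127.0.0.1:" + port) // 32778 in the run above
}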

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-397992 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler          | v1.32.2            | d8e673e7c9983 | 70.7MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/minikube-local-cache-test     | functional-397992  | ad1ebf9975b9a | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.32.2            | 85b7a174738ba | 98.1MB |
| registry.k8s.io/kube-controller-manager | v1.32.2            | b6a454c5a800d | 90.8MB |
| docker.io/kindest/kindnetd              | v20250214-acbabc1a | df3849d954c98 | 95.7MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | d300845f67aeb | 95.7MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-397992  | 1a88bf4faa2f8 | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-proxy              | v1.32.2            | f1332858868e1 | 95.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-397992 image ls --format table --alsologtostderr:
I0414 11:09:41.502060 1805534 out.go:345] Setting OutFile to fd 1 ...
I0414 11:09:41.502352 1805534 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:09:41.502363 1805534 out.go:358] Setting ErrFile to fd 2...
I0414 11:09:41.502368 1805534 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:09:41.502682 1805534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
I0414 11:09:41.503356 1805534 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:09:41.503461 1805534 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:09:41.504714 1805534 cli_runner.go:164] Run: docker container inspect functional-397992 --format={{.State.Status}}
I0414 11:09:41.523053 1805534 ssh_runner.go:195] Run: systemctl --version
I0414 11:09:41.523135 1805534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-397992
I0414 11:09:41.540652 1805534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/functional-397992/id_rsa Username:docker}
I0414 11:09:41.625405 1805534 ssh_runner.go:195] Run: sudo crictl images --output json
E0414 11:10:27.555035 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-397992 image ls --format json --alsologtostderr:
[{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ad1ebf9975b9a261a4b58b190077b8da39ba746156e26d275ce46cebd7e522d0","repoDigests":["localhost/minikube-local-cache-test@sha256:59f19dc932ce174a9369a6ce7d15a87030591dc8a902ecc78569422997dbb478"],"repoTags":["localhost/minikube-local-cache-test:functional-397992"],"size":"3330"},{"id":"1a88bf4faa2f8076567c5486b5752097be5aa104c9732e398c0478fe07ae1989","repoDigests":["localhost/my-image@sha256:dac3f63636d367432ba02c9e33837727e0dd0b2d70319ee6f51ef7ca53d6011f"],"repoTags":["localhost/my-image:functional-397992"],"size":"1468193"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d","registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"95271321"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"98055648"},{"id":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5","registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"90793286"},{"id":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76","registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"70653254"},{"id":"d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26","docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"95714353"},{"id":"df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f","repoDigests":["docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495","docker.io/kindest/kindnetd@sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97"],"repoTags":["docker.io/kindest/kindnetd:v20250214-acbabc1a"],"size":"95703604"},{"id":"1b534239b99d57fc41d3eb443bb26820237af58a2486268c077951c289c25ddb","repoDigests":["docker.io/library/e25a5ff6b6c8c707714958d19c03f503a739290a9ff64850b3d1627714db1895-tmp@sha256:17b29675d544ed207eb8955c2a992ece83e7055a10ce79e48cbfbc39706427bf"],"repoTags":[],"size":"1465612"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-397992 image ls --format json --alsologtostderr:
I0414 11:09:41.288770 1805483 out.go:345] Setting OutFile to fd 1 ...
I0414 11:09:41.289390 1805483 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:09:41.289411 1805483 out.go:358] Setting ErrFile to fd 2...
I0414 11:09:41.289426 1805483 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:09:41.289977 1805483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
I0414 11:09:41.290918 1805483 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:09:41.291024 1805483 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:09:41.291416 1805483 cli_runner.go:164] Run: docker container inspect functional-397992 --format={{.State.Status}}
I0414 11:09:41.310054 1805483 ssh_runner.go:195] Run: systemctl --version
I0414 11:09:41.310104 1805483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-397992
I0414 11:09:41.327453 1805483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/functional-397992/id_rsa Username:docker}
I0414 11:09:41.413239 1805483 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-397992 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "98055648"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
- docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "95714353"
- id: ad1ebf9975b9a261a4b58b190077b8da39ba746156e26d275ce46cebd7e522d0
repoDigests:
- localhost/minikube-local-cache-test@sha256:59f19dc932ce174a9369a6ce7d15a87030591dc8a902ecc78569422997dbb478
repoTags:
- localhost/minikube-local-cache-test:functional-397992
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "90793286"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
- registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "95271321"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "70653254"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f
repoDigests:
- docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495
- docker.io/kindest/kindnetd@sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97
repoTags:
- docker.io/kindest/kindnetd:v20250214-acbabc1a
size: "95703604"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-397992 image ls --format yaml --alsologtostderr:
I0414 11:09:38.401347 1805002 out.go:345] Setting OutFile to fd 1 ...
I0414 11:09:38.401499 1805002 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:09:38.401510 1805002 out.go:358] Setting ErrFile to fd 2...
I0414 11:09:38.401514 1805002 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:09:38.401715 1805002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
I0414 11:09:38.403673 1805002 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:09:38.403816 1805002 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:09:38.404217 1805002 cli_runner.go:164] Run: docker container inspect functional-397992 --format={{.State.Status}}
I0414 11:09:38.423155 1805002 ssh_runner.go:195] Run: systemctl --version
I0414 11:09:38.423238 1805002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-397992
I0414 11:09:38.441883 1805002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/functional-397992/id_rsa Username:docker}
I0414 11:09:38.529156 1805002 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
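The YAML listing above is the same image inventory as the JSON variant, only rendered differently; under the hood both wrap crictl inside the node, as the stderr trace shows. A minimal sketch for reproducing it against a running profile (profile name taken from this run; any profile works):

$ out/minikube-linux-amd64 -p functional-397992 image ls --format yaml    # the listing tested here
$ out/minikube-linux-amd64 -p functional-397992 image ls --format json    # same data as JSON
$ out/minikube-linux-amd64 -p functional-397992 ssh -- sudo crictl images --output json    # the raw source inside the node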

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397992 ssh pgrep buildkitd: exit status 1 (242.640915ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 image build -t localhost/my-image:functional-397992 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-397992 image build -t localhost/my-image:functional-397992 testdata/build --alsologtostderr: (2.210113026s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-397992 image build -t localhost/my-image:functional-397992 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1b534239b99
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-397992
--> 1a88bf4faa2
Successfully tagged localhost/my-image:functional-397992
1a88bf4faa2f8076567c5486b5752097be5aa104c9732e398c0478fe07ae1989
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-397992 image build -t localhost/my-image:functional-397992 testdata/build --alsologtostderr:
I0414 11:09:38.861988 1805147 out.go:345] Setting OutFile to fd 1 ...
I0414 11:09:38.862250 1805147 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:09:38.862263 1805147 out.go:358] Setting ErrFile to fd 2...
I0414 11:09:38.862267 1805147 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:09:38.862460 1805147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
I0414 11:09:38.863133 1805147 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:09:38.863815 1805147 config.go:182] Loaded profile config "functional-397992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 11:09:38.864237 1805147 cli_runner.go:164] Run: docker container inspect functional-397992 --format={{.State.Status}}
I0414 11:09:38.884645 1805147 ssh_runner.go:195] Run: systemctl --version
I0414 11:09:38.884706 1805147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-397992
I0414 11:09:38.903496 1805147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/functional-397992/id_rsa Username:docker}
I0414 11:09:38.989335 1805147 build_images.go:161] Building image from path: /tmp/build.237265743.tar
I0414 11:09:38.989406 1805147 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0414 11:09:38.999098 1805147 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.237265743.tar
I0414 11:09:39.002902 1805147 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.237265743.tar: stat -c "%s %y" /var/lib/minikube/build/build.237265743.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.237265743.tar': No such file or directory
I0414 11:09:39.002945 1805147 ssh_runner.go:362] scp /tmp/build.237265743.tar --> /var/lib/minikube/build/build.237265743.tar (3072 bytes)
I0414 11:09:39.027278 1805147 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.237265743
I0414 11:09:39.036202 1805147 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.237265743 -xf /var/lib/minikube/build/build.237265743.tar
I0414 11:09:39.046382 1805147 crio.go:315] Building image: /var/lib/minikube/build/build.237265743
I0414 11:09:39.046450 1805147 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-397992 /var/lib/minikube/build/build.237265743 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0414 11:09:40.997693 1805147 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-397992 /var/lib/minikube/build/build.237265743 --cgroup-manager=cgroupfs: (1.951222442s)
I0414 11:09:40.997762 1805147 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.237265743
I0414 11:09:41.007349 1805147 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.237265743.tar
I0414 11:09:41.015608 1805147 build_images.go:217] Built localhost/my-image:functional-397992 from /tmp/build.237265743.tar
I0414 11:09:41.015638 1805147 build_images.go:133] succeeded building to: functional-397992
I0414 11:09:41.015642 1805147 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.67s)
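As the stderr trace shows, on the crio runtime `image build` tars the local context, copies the tarball into the node, and delegates the actual build to podman inside the node (the pgrep for buildkitd failing first is expected; buildkit is not used here). A sketch of the same round trip, with the tag and context directory from this run:

$ out/minikube-linux-amd64 -p functional-397992 image build -t localhost/my-image:functional-397992 testdata/build
$ out/minikube-linux-amd64 -p functional-397992 image ls | grep my-image    # confirm the tag landed in the node's store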

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 image rm kicbase/echo-server:functional-397992 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-397992 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)
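The removal can be double-checked the same way; a small sketch, assuming the echo-server image was loaded earlier in the suite:

$ out/minikube-linux-amd64 -p functional-397992 image rm kicbase/echo-server:functional-397992
$ out/minikube-linux-amd64 -p functional-397992 image ls | grep echo-server || echo "image removed"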

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-397992 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-397992 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-397992 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-397992 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1803991: os: process already finished
helpers_test.go:508: unable to kill pid 1803775: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)
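`minikube tunnel` runs in the foreground and routes traffic for LoadBalancer services into the cluster; this test only checks that a second concurrent tunnel can be started and that both tear down cleanly (the "process already finished" lines are the cleanup verifying exactly that). A sketch of typical interactive use, not tied to this run:

$ out/minikube-linux-amd64 -p functional-397992 tunnel --alsologtostderr &
$ kubectl get svc --watch    # LoadBalancer services gain an external IP while the tunnel is up
$ kill %1                    # stop the tunnel when done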

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-397992 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-397992 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
E0414 11:17:43.691636 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-397992
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-397992
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-397992
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (103.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-956330 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-956330 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m43.071956201s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (103.76s)
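`--ha` provisions a cluster with multiple control-plane nodes (three in this run, as the later status output for ha-956330, -m02 and -m03 confirms). The invocation from this run, reusable as-is:

$ out/minikube-linux-amd64 start -p ha-956330 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=crio
$ out/minikube-linux-amd64 -p ha-956330 status -v=7 --alsologtostderr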

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-956330 -- rollout status deployment/busybox: (3.432359229s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- exec busybox-58667487b6-dklbh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- exec busybox-58667487b6-ffchz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- exec busybox-58667487b6-wfkpc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- exec busybox-58667487b6-dklbh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- exec busybox-58667487b6-ffchz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- exec busybox-58667487b6-wfkpc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- exec busybox-58667487b6-dklbh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- exec busybox-58667487b6-ffchz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- exec busybox-58667487b6-wfkpc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.59s)
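The deployment check boils down to applying a busybox deployment and resolving cluster DNS from each replica. A condensed sketch (the test iterates over individual pod names; `deploy/busybox` is a convenience that lets kubectl pick one replica and is not what the test itself runs):

$ out/minikube-linux-amd64 kubectl -p ha-956330 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
$ out/minikube-linux-amd64 kubectl -p ha-956330 -- rollout status deployment/busybox
$ out/minikube-linux-amd64 kubectl -p ha-956330 -- exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local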

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- exec busybox-58667487b6-dklbh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- exec busybox-58667487b6-dklbh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- exec busybox-58667487b6-ffchz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- exec busybox-58667487b6-ffchz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- exec busybox-58667487b6-wfkpc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-956330 -- exec busybox-58667487b6-wfkpc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.12s)
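The shell pipeline above extracts the resolved address of host.minikube.internal from busybox's nslookup output (in busybox's format the fifth line carries the answer, and the third space-separated field is the IP); the follow-up ping confirms the host gateway is reachable from the pod. Standalone, using a pod name from this run:

$ out/minikube-linux-amd64 kubectl -p ha-956330 -- exec busybox-58667487b6-dklbh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
192.168.49.1
$ out/minikube-linux-amd64 kubectl -p ha-956330 -- exec busybox-58667487b6-dklbh -- sh -c "ping -c 1 192.168.49.1"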

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (35.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-956330 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-956330 -v=7 --alsologtostderr: (34.699962039s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.54s)
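`node add` without further flags joins a worker node (the ha-956330-m04 Worker entry seen in later status output). From this run:

$ out/minikube-linux-amd64 node add -p ha-956330
$ out/minikube-linux-amd64 -p ha-956330 status    # m04 now listed as a Worker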

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-956330 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp testdata/cp-test.txt ha-956330:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp ha-956330:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1490670771/001/cp-test_ha-956330.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp ha-956330:/home/docker/cp-test.txt ha-956330-m02:/home/docker/cp-test_ha-956330_ha-956330-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m02 "sudo cat /home/docker/cp-test_ha-956330_ha-956330-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp ha-956330:/home/docker/cp-test.txt ha-956330-m03:/home/docker/cp-test_ha-956330_ha-956330-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m03 "sudo cat /home/docker/cp-test_ha-956330_ha-956330-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp ha-956330:/home/docker/cp-test.txt ha-956330-m04:/home/docker/cp-test_ha-956330_ha-956330-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m04 "sudo cat /home/docker/cp-test_ha-956330_ha-956330-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp testdata/cp-test.txt ha-956330-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp ha-956330-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1490670771/001/cp-test_ha-956330-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp ha-956330-m02:/home/docker/cp-test.txt ha-956330:/home/docker/cp-test_ha-956330-m02_ha-956330.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330 "sudo cat /home/docker/cp-test_ha-956330-m02_ha-956330.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp ha-956330-m02:/home/docker/cp-test.txt ha-956330-m03:/home/docker/cp-test_ha-956330-m02_ha-956330-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m03 "sudo cat /home/docker/cp-test_ha-956330-m02_ha-956330-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp ha-956330-m02:/home/docker/cp-test.txt ha-956330-m04:/home/docker/cp-test_ha-956330-m02_ha-956330-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m04 "sudo cat /home/docker/cp-test_ha-956330-m02_ha-956330-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp testdata/cp-test.txt ha-956330-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp ha-956330-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1490670771/001/cp-test_ha-956330-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp ha-956330-m03:/home/docker/cp-test.txt ha-956330:/home/docker/cp-test_ha-956330-m03_ha-956330.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330 "sudo cat /home/docker/cp-test_ha-956330-m03_ha-956330.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp ha-956330-m03:/home/docker/cp-test.txt ha-956330-m02:/home/docker/cp-test_ha-956330-m03_ha-956330-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m02 "sudo cat /home/docker/cp-test_ha-956330-m03_ha-956330-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp ha-956330-m03:/home/docker/cp-test.txt ha-956330-m04:/home/docker/cp-test_ha-956330-m03_ha-956330-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m04 "sudo cat /home/docker/cp-test_ha-956330-m03_ha-956330-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp testdata/cp-test.txt ha-956330-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp ha-956330-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1490670771/001/cp-test_ha-956330-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp ha-956330-m04:/home/docker/cp-test.txt ha-956330:/home/docker/cp-test_ha-956330-m04_ha-956330.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330 "sudo cat /home/docker/cp-test_ha-956330-m04_ha-956330.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp ha-956330-m04:/home/docker/cp-test.txt ha-956330-m02:/home/docker/cp-test_ha-956330-m04_ha-956330-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m02 "sudo cat /home/docker/cp-test_ha-956330-m04_ha-956330-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 cp ha-956330-m04:/home/docker/cp-test.txt ha-956330-m03:/home/docker/cp-test_ha-956330-m04_ha-956330-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m03 "sudo cat /home/docker/cp-test_ha-956330-m04_ha-956330-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.29s)
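Every `cp`/`ssh` pair in the matrix above has the same shape: node paths are addressed as <node>:<path>, a bare local path stays on the host, and (per minikube's cp semantics) omitting the node targets the primary control plane. One representative round trip, taken verbatim from the matrix:

$ out/minikube-linux-amd64 -p ha-956330 cp testdata/cp-test.txt ha-956330-m03:/home/docker/cp-test.txt
$ out/minikube-linux-amd64 -p ha-956330 ssh -n ha-956330-m03 "sudo cat /home/docker/cp-test.txt"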

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-956330 node stop m02 -v=7 --alsologtostderr: (11.905553242s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-956330 status -v=7 --alsologtostderr: exit status 7 (670.822517ms)

                                                
                                                
-- stdout --
	ha-956330
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-956330-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-956330-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-956330-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 11:21:51.177340 1830941 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:21:51.177648 1830941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:21:51.177657 1830941 out.go:358] Setting ErrFile to fd 2...
	I0414 11:21:51.177662 1830941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:21:51.177917 1830941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
	I0414 11:21:51.178096 1830941 out.go:352] Setting JSON to false
	I0414 11:21:51.178132 1830941 mustload.go:65] Loading cluster: ha-956330
	I0414 11:21:51.178193 1830941 notify.go:220] Checking for updates...
	I0414 11:21:51.178528 1830941 config.go:182] Loaded profile config "ha-956330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:21:51.178552 1830941 status.go:174] checking status of ha-956330 ...
	I0414 11:21:51.179010 1830941 cli_runner.go:164] Run: docker container inspect ha-956330 --format={{.State.Status}}
	I0414 11:21:51.199148 1830941 status.go:371] ha-956330 host status = "Running" (err=<nil>)
	I0414 11:21:51.199174 1830941 host.go:66] Checking if "ha-956330" exists ...
	I0414 11:21:51.199448 1830941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-956330
	I0414 11:21:51.219160 1830941 host.go:66] Checking if "ha-956330" exists ...
	I0414 11:21:51.219452 1830941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 11:21:51.219506 1830941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-956330
	I0414 11:21:51.238152 1830941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/ha-956330/id_rsa Username:docker}
	I0414 11:21:51.322165 1830941 ssh_runner.go:195] Run: systemctl --version
	I0414 11:21:51.326286 1830941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 11:21:51.337590 1830941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 11:21:51.387630 1830941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-14 11:21:51.37847924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0414 11:21:51.388168 1830941 kubeconfig.go:125] found "ha-956330" server: "https://192.168.49.254:8443"
	I0414 11:21:51.388201 1830941 api_server.go:166] Checking apiserver status ...
	I0414 11:21:51.388244 1830941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:21:51.399260 1830941 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1550/cgroup
	I0414 11:21:51.408521 1830941 api_server.go:182] apiserver freezer: "11:freezer:/docker/17a93b1c86d485abebf0d32e3d18a2b7bf23039b7f3ae0e196ece6f3d0e660db/crio/crio-86d51888db6744f6eb3f0fd9682fdceff82cd04efb34959a3c0110422a62cadd"
	I0414 11:21:51.408590 1830941 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/17a93b1c86d485abebf0d32e3d18a2b7bf23039b7f3ae0e196ece6f3d0e660db/crio/crio-86d51888db6744f6eb3f0fd9682fdceff82cd04efb34959a3c0110422a62cadd/freezer.state
	I0414 11:21:51.416887 1830941 api_server.go:204] freezer state: "THAWED"
	I0414 11:21:51.416917 1830941 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0414 11:21:51.420485 1830941 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0414 11:21:51.420515 1830941 status.go:463] ha-956330 apiserver status = Running (err=<nil>)
	I0414 11:21:51.420527 1830941 status.go:176] ha-956330 status: &{Name:ha-956330 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 11:21:51.420552 1830941 status.go:174] checking status of ha-956330-m02 ...
	I0414 11:21:51.420876 1830941 cli_runner.go:164] Run: docker container inspect ha-956330-m02 --format={{.State.Status}}
	I0414 11:21:51.439089 1830941 status.go:371] ha-956330-m02 host status = "Stopped" (err=<nil>)
	I0414 11:21:51.439111 1830941 status.go:384] host is not running, skipping remaining checks
	I0414 11:21:51.439117 1830941 status.go:176] ha-956330-m02 status: &{Name:ha-956330-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 11:21:51.439154 1830941 status.go:174] checking status of ha-956330-m03 ...
	I0414 11:21:51.439471 1830941 cli_runner.go:164] Run: docker container inspect ha-956330-m03 --format={{.State.Status}}
	I0414 11:21:51.458868 1830941 status.go:371] ha-956330-m03 host status = "Running" (err=<nil>)
	I0414 11:21:51.458913 1830941 host.go:66] Checking if "ha-956330-m03" exists ...
	I0414 11:21:51.459243 1830941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-956330-m03
	I0414 11:21:51.479109 1830941 host.go:66] Checking if "ha-956330-m03" exists ...
	I0414 11:21:51.479384 1830941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 11:21:51.479433 1830941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-956330-m03
	I0414 11:21:51.498428 1830941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/ha-956330-m03/id_rsa Username:docker}
	I0414 11:21:51.585938 1830941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 11:21:51.598832 1830941 kubeconfig.go:125] found "ha-956330" server: "https://192.168.49.254:8443"
	I0414 11:21:51.598869 1830941 api_server.go:166] Checking apiserver status ...
	I0414 11:21:51.598917 1830941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:21:51.611616 1830941 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1426/cgroup
	I0414 11:21:51.623930 1830941 api_server.go:182] apiserver freezer: "11:freezer:/docker/f4126d6f1a438050cf2a21fbed582aa396608d67d5773ad8fd18245c2f198a32/crio/crio-a0eb2ef23f907c57aed06729c0731c3404f70ee2f51ad2bcecdab6be7af39cb5"
	I0414 11:21:51.624023 1830941 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f4126d6f1a438050cf2a21fbed582aa396608d67d5773ad8fd18245c2f198a32/crio/crio-a0eb2ef23f907c57aed06729c0731c3404f70ee2f51ad2bcecdab6be7af39cb5/freezer.state
	I0414 11:21:51.632924 1830941 api_server.go:204] freezer state: "THAWED"
	I0414 11:21:51.632954 1830941 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0414 11:21:51.636785 1830941 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0414 11:21:51.636811 1830941 status.go:463] ha-956330-m03 apiserver status = Running (err=<nil>)
	I0414 11:21:51.636820 1830941 status.go:176] ha-956330-m03 status: &{Name:ha-956330-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 11:21:51.636835 1830941 status.go:174] checking status of ha-956330-m04 ...
	I0414 11:21:51.637084 1830941 cli_runner.go:164] Run: docker container inspect ha-956330-m04 --format={{.State.Status}}
	I0414 11:21:51.656715 1830941 status.go:371] ha-956330-m04 host status = "Running" (err=<nil>)
	I0414 11:21:51.656745 1830941 host.go:66] Checking if "ha-956330-m04" exists ...
	I0414 11:21:51.656988 1830941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-956330-m04
	I0414 11:21:51.676514 1830941 host.go:66] Checking if "ha-956330-m04" exists ...
	I0414 11:21:51.676837 1830941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 11:21:51.676901 1830941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-956330-m04
	I0414 11:21:51.695294 1830941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/ha-956330-m04/id_rsa Username:docker}
	I0414 11:21:51.782078 1830941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 11:21:51.794088 1830941 status.go:176] ha-956330-m04 status: &{Name:ha-956330-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.58s)
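Note the exit code: `status` returns non-zero (7 in this run) while any host is stopped, and that is what the test asserts alongside the per-node breakdown. A sketch:

$ out/minikube-linux-amd64 -p ha-956330 node stop m02
$ out/minikube-linux-amd64 -p ha-956330 status; echo "exit: $?"    # non-zero while m02 is down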

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (22.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-956330 node start m02 -v=7 --alsologtostderr: (21.633434819s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-956330 status -v=7 --alsologtostderr: (1.053860584s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.77s)
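Bringing the node back mirrors the stop, and the kubectl check confirms it rejoined the cluster. From this run:

$ out/minikube-linux-amd64 -p ha-956330 node start m02
$ kubectl get nodes    # m02 should return to Ready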

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.053398073s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (173.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-956330 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-956330 -v=7 --alsologtostderr
E0414 11:22:43.692019 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-956330 -v=7 --alsologtostderr: (36.838238038s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-956330 --wait=true -v=7 --alsologtostderr
E0414 11:23:48.629363 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:23:48.635813 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:23:48.648052 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:23:48.670261 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:23:48.712407 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:23:48.794193 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:23:48.956011 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:23:49.277287 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:23:49.919179 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:23:51.201098 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:23:53.762423 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:23:58.884500 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:24:06.759272 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:24:09.126472 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:24:29.607821 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-956330 --wait=true -v=7 --alsologtostderr: (2m16.23944075s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-956330
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (173.20s)
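The point of this test is that a full stop/start cycle preserves the node list (the repeated cert_rotation errors above are stray watchers for already-deleted profiles, not part of the assertion). Condensed from this run:

$ out/minikube-linux-amd64 node list -p ha-956330         # record the nodes
$ out/minikube-linux-amd64 stop -p ha-956330
$ out/minikube-linux-amd64 start -p ha-956330 --wait=true
$ out/minikube-linux-amd64 node list -p ha-956330         # same set expected after restart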

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 node delete m03 -v=7 --alsologtostderr
E0414 11:25:10.569715 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-956330 node delete m03 -v=7 --alsologtostderr: (10.692416957s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.47s)
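Deleting a control-plane node removes both the node container and the Kubernetes node object; the go-template check then verifies the remaining nodes all report Ready. From this run:

$ out/minikube-linux-amd64 -p ha-956330 node delete m03
$ kubectl get nodes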

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-956330 stop -v=7 --alsologtostderr: (35.654971409s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-956330 status -v=7 --alsologtostderr: exit status 7 (111.14529ms)
-- stdout --
	ha-956330
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-956330-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-956330-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0414 11:25:57.362556 1848259 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:25:57.362879 1848259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:25:57.362891 1848259 out.go:358] Setting ErrFile to fd 2...
	I0414 11:25:57.362895 1848259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:25:57.363148 1848259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
	I0414 11:25:57.363363 1848259 out.go:352] Setting JSON to false
	I0414 11:25:57.363406 1848259 mustload.go:65] Loading cluster: ha-956330
	I0414 11:25:57.363479 1848259 notify.go:220] Checking for updates...
	I0414 11:25:57.363900 1848259 config.go:182] Loaded profile config "ha-956330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:25:57.363930 1848259 status.go:174] checking status of ha-956330 ...
	I0414 11:25:57.364466 1848259 cli_runner.go:164] Run: docker container inspect ha-956330 --format={{.State.Status}}
	I0414 11:25:57.385948 1848259 status.go:371] ha-956330 host status = "Stopped" (err=<nil>)
	I0414 11:25:57.385977 1848259 status.go:384] host is not running, skipping remaining checks
	I0414 11:25:57.385983 1848259 status.go:176] ha-956330 status: &{Name:ha-956330 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 11:25:57.386009 1848259 status.go:174] checking status of ha-956330-m02 ...
	I0414 11:25:57.386276 1848259 cli_runner.go:164] Run: docker container inspect ha-956330-m02 --format={{.State.Status}}
	I0414 11:25:57.404627 1848259 status.go:371] ha-956330-m02 host status = "Stopped" (err=<nil>)
	I0414 11:25:57.404656 1848259 status.go:384] host is not running, skipping remaining checks
	I0414 11:25:57.404664 1848259 status.go:176] ha-956330-m02 status: &{Name:ha-956330-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 11:25:57.404691 1848259 status.go:174] checking status of ha-956330-m04 ...
	I0414 11:25:57.404966 1848259 cli_runner.go:164] Run: docker container inspect ha-956330-m04 --format={{.State.Status}}
	I0414 11:25:57.423352 1848259 status.go:371] ha-956330-m04 host status = "Stopped" (err=<nil>)
	I0414 11:25:57.423375 1848259 status.go:384] host is not running, skipping remaining checks
	I0414 11:25:57.423383 1848259 status.go:176] ha-956330-m04 status: &{Name:ha-956330-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.77s)
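
Note: exit status 7 from the status command is the expected outcome here, not a failure. minikube status encodes component state in its exit code; treating the exact bit assignment as an assumption inferred from this run (bit 0 = host, bit 1 = kubelet, bit 2 = apiserver, so 1|2|4 = 7 when everything is down), the check reduces to:

    out/minikube-linux-amd64 -p ha-956330 status -v=7 --alsologtostderr
    echo $?   # 7 while all nodes are stopped, 0 once the cluster is back up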

TestMultiControlPlane/serial/RestartCluster (100.33s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-956330 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0414 11:26:32.491992 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-956330 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m39.544273019s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (100.33s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

TestMultiControlPlane/serial/AddSecondaryNode (37.93s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-956330 --control-plane -v=7 --alsologtostderr
E0414 11:27:43.692609 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-956330 --control-plane -v=7 --alsologtostderr: (37.078269345s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-956330 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.93s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

TestJSONOutput/start/Command (40.68s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-953033 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0414 11:28:48.632600 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-953033 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (40.675351956s)
--- PASS: TestJSONOutput/start/Command (40.68s)
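
Note: with --output=json, every progress line minikube emits is a single-line JSON event in a CloudEvents-style envelope (specversion 1.0), as the TestErrorJSONOutput stdout further below shows. A sketch for following just the step events, assuming jq is available (the profile name is the one from this run):

    out/minikube-linux-amd64 start -p json-output-953033 --output=json --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep)/\(.data.totalsteps) \(.data.name)"'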

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-953033 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-953033 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.83s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-953033 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-953033 --output=json --user=testUser: (5.826434854s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-147960 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-147960 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (72.001794ms)
-- stdout --
	{"specversion":"1.0","id":"fb45285d-a941-4df7-a005-59809e79777c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-147960] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"32097ab2-b7e8-47a9-b54d-d2ddaa096a82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20534"}}
	{"specversion":"1.0","id":"109f742b-a692-48d3-a871-4258d81f8eb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8630acdf-4f69-4f02-90b1-006aa02b55d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20534-1756784/kubeconfig"}}
	{"specversion":"1.0","id":"b909cd36-99a2-4e5b-9f98-6ebe566ae5d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-1756784/.minikube"}}
	{"specversion":"1.0","id":"6556343d-5ee9-451f-834c-6a8bd9ebd582","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"05ed9340-551a-4725-afc1-73358b9f8f7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e49d221e-b8d0-47b3-9ffa-75417d974845","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-147960" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-147960
--- PASS: TestErrorJSONOutput (0.22s)
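
Note: the stdout above shows that failures are also emitted as structured events (type io.k8s.sigs.minikube.error) carrying name, message, and exitcode fields, which is what lets the test assert exit status 56 (DRV_UNSUPPORTED_OS) for the deliberately bogus --driver=fail. A sketch for pulling the error out in a script, assuming jq:

    out/minikube-linux-amd64 start -p json-output-error-147960 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'
    # DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64 (exit 56)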

TestKicCustomNetwork/create_custom_network (29.05s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-054788 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-054788 --network=: (26.874457101s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-054788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-054788
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-054788: (2.15583308s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.05s)
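
Note: an empty --network= value makes minikube fall back to creating its own bridge network, and the follow-up `docker network ls` is the assertion that it exists. A hand-run sketch of the same check, assuming the network is named after the profile (which is what this test appears to rely on):

    out/minikube-linux-amd64 start -p docker-network-054788 --network=
    docker network ls --format '{{.Name}}' | grep docker-network-054788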

TestKicCustomNetwork/use_default_bridge_network (23.75s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-212172 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-212172 --network=bridge: (21.790082009s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-212172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-212172
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-212172: (1.941485338s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.75s)

TestKicExistingNetwork (24.18s)

=== RUN   TestKicExistingNetwork
I0414 11:30:09.815017 1763595 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0414 11:30:09.831897 1763595 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0414 11:30:09.831984 1763595 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0414 11:30:09.832018 1763595 cli_runner.go:164] Run: docker network inspect existing-network
W0414 11:30:09.849109 1763595 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0414 11:30:09.849151 1763595 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0414 11:30:09.849166 1763595 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0414 11:30:09.849303 1763595 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0414 11:30:09.868065 1763595 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-18dcb84e4f39 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:38:02:b5:29:f8} reservation:<nil>}
I0414 11:30:09.868529 1763595 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00184d7a0}
I0414 11:30:09.868572 1763595 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0414 11:30:09.868619 1763595 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0414 11:30:09.921014 1763595 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-039444 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-039444 --network=existing-network: (22.056176851s)
helpers_test.go:175: Cleaning up "existing-network-039444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-039444
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-039444: (1.98515824s)
I0414 11:30:33.980749 1763595 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.18s)
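
Note: the I/W log lines above show the test pre-creating the network before pointing minikube at it: it probes for the name, skips 192.168.49.0/24 because the first cluster holds it, and takes the next free /24. A trimmed sketch of the same flow (the subnet choice will vary with what is already allocated on the host):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    out/minikube-linux-amd64 start -p existing-network-039444 --network=existing-network
    docker network rm existing-network   # once the profile has been deleted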

TestKicCustomSubnet (27.74s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-767149 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-767149 --subnet=192.168.60.0/24: (25.589574645s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-767149 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-767149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-767149
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-767149: (2.127903337s)
--- PASS: TestKicCustomSubnet (27.74s)

TestKicStaticIP (25.52s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-600158 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-600158 --static-ip=192.168.200.200: (23.335285229s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-600158 ip
helpers_test.go:175: Cleaning up "static-ip-600158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-600158
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-600158: (2.051637471s)
--- PASS: TestKicStaticIP (25.52s)
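
Note: TestKicCustomSubnet and TestKicStaticIP verify their knobs from opposite ends: the former inspects the docker network for the requested subnet, the latter asks minikube for the node's address. Both checks, exactly as run above:

    docker network inspect custom-subnet-767149 --format "{{(index .IPAM.Config 0).Subnet}}"   # 192.168.60.0/24
    out/minikube-linux-amd64 -p static-ip-600158 ip                                            # 192.168.200.200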

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (48.97s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-343407 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-343407 --driver=docker  --container-runtime=crio: (21.647972701s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-361142 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-361142 --driver=docker  --container-runtime=crio: (21.92875191s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-343407
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-361142
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-361142" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-361142
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-361142: (1.89888166s)
helpers_test.go:175: Cleaning up "first-343407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-343407
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-343407: (2.299848862s)
--- PASS: TestMinikubeProfile (48.97s)

TestMountStart/serial/StartWithMountFirst (8.09s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-715349 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-715349 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.088706129s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.09s)
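
Note: the flag list above wires up the 9p host mount that the later VerifyMount* steps read back: --mount enables it, --mount-port/--mount-msize/--mount-uid/--mount-gid pin the transport and ownership, and --no-kubernetes keeps the node minimal since only the mount is under test. Verification is then a one-liner (judging by the ssh checks below, /minikube-host is where the mount lands inside the node):

    out/minikube-linux-amd64 -p mount-start-1-715349 ssh -- ls /minikube-host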

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-715349 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (5.26s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-732645 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-732645 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.26401457s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.26s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-732645 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-715349 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-715349 --alsologtostderr -v=5: (1.644684584s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-732645 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-732645
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-732645: (1.181075665s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (7.12s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-732645
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-732645: (6.117184474s)
--- PASS: TestMountStart/serial/RestartStopped (7.12s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-732645 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (69.1s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-958537 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0414 11:32:43.691606 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:33:48.629170 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-958537 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m8.63917727s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.10s)

TestMultiNode/serial/DeployApp2Nodes (6.57s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-958537 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-958537 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-958537 -- rollout status deployment/busybox: (4.993474181s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-958537 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-958537 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-958537 -- exec busybox-58667487b6-bhfz2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-958537 -- exec busybox-58667487b6-x2brn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-958537 -- exec busybox-58667487b6-bhfz2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-958537 -- exec busybox-58667487b6-x2brn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-958537 -- exec busybox-58667487b6-bhfz2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-958537 -- exec busybox-58667487b6-x2brn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.57s)
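
Note: the busybox deployment appears to place one replica on each node, so exec-ing both pods by name exercises DNS from both nodes at three levels: an external name (kubernetes.io), a short in-cluster name resolved via the search path (kubernetes.default), and the full FQDN. One of the per-pod checks, with the pod name taken from this run:

    kubectl --context multinode-958537 exec busybox-58667487b6-bhfz2 -- nslookup kubernetes.default.svc.cluster.local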

TestMultiNode/serial/PingHostFrom2Pods (0.77s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-958537 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-958537 -- exec busybox-58667487b6-bhfz2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-958537 -- exec busybox-58667487b6-bhfz2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-958537 -- exec busybox-58667487b6-x2brn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-958537 -- exec busybox-58667487b6-x2brn -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)
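
Note: the pipeline in these exec calls recovers the host gateway address from inside a pod. Busybox's nslookup prints the resolved host.minikube.internal entry as an "Address 1: <ip> ..." line, fifth in its output, so awk 'NR==5' selects that line and cut -d' ' -f3 keeps the third space-separated field, the IP itself, which the second command then pings. The line number is tied to busybox's output layout; the extraction on its own:

    kubectl --context multinode-958537 exec busybox-58667487b6-bhfz2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # 192.168.67.1 on this run, matching the ping target in the log above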

TestMultiNode/serial/AddNode (28.74s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-958537 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-958537 -v 3 --alsologtostderr: (28.13808473s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.74s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-958537 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.63s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

TestMultiNode/serial/CopyFile (9.27s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 cp testdata/cp-test.txt multinode-958537:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 cp multinode-958537:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1716023643/001/cp-test_multinode-958537.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 cp multinode-958537:/home/docker/cp-test.txt multinode-958537-m02:/home/docker/cp-test_multinode-958537_multinode-958537-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537-m02 "sudo cat /home/docker/cp-test_multinode-958537_multinode-958537-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 cp multinode-958537:/home/docker/cp-test.txt multinode-958537-m03:/home/docker/cp-test_multinode-958537_multinode-958537-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537-m03 "sudo cat /home/docker/cp-test_multinode-958537_multinode-958537-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 cp testdata/cp-test.txt multinode-958537-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 cp multinode-958537-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1716023643/001/cp-test_multinode-958537-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 cp multinode-958537-m02:/home/docker/cp-test.txt multinode-958537:/home/docker/cp-test_multinode-958537-m02_multinode-958537.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537 "sudo cat /home/docker/cp-test_multinode-958537-m02_multinode-958537.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 cp multinode-958537-m02:/home/docker/cp-test.txt multinode-958537-m03:/home/docker/cp-test_multinode-958537-m02_multinode-958537-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537-m03 "sudo cat /home/docker/cp-test_multinode-958537-m02_multinode-958537-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 cp testdata/cp-test.txt multinode-958537-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 cp multinode-958537-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1716023643/001/cp-test_multinode-958537-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 cp multinode-958537-m03:/home/docker/cp-test.txt multinode-958537:/home/docker/cp-test_multinode-958537-m03_multinode-958537.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537 "sudo cat /home/docker/cp-test_multinode-958537-m03_multinode-958537.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 cp multinode-958537-m03:/home/docker/cp-test.txt multinode-958537-m02:/home/docker/cp-test_multinode-958537-m03_multinode-958537-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537-m02 "sudo cat /home/docker/cp-test_multinode-958537-m03_multinode-958537-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.27s)
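
Note: the copy matrix above exercises every direction of `minikube cp`, whose source and destination may each carry an optional <node>: prefix (a bare path is local on the host, and per the log the primary node is addressed by the profile name). One round trip from the run, as a sketch:

    out/minikube-linux-amd64 -p multinode-958537 cp testdata/cp-test.txt multinode-958537-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-958537 ssh -n multinode-958537-m02 "sudo cat /home/docker/cp-test.txt"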

TestMultiNode/serial/StopNode (2.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-958537 node stop m03: (1.185210953s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-958537 status: exit status 7 (467.48481ms)
-- stdout --
	multinode-958537
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-958537-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-958537-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-958537 status --alsologtostderr: exit status 7 (476.78542ms)
-- stdout --
	multinode-958537
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-958537-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-958537-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0414 11:34:39.218936 1913954 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:34:39.219242 1913954 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:34:39.219255 1913954 out.go:358] Setting ErrFile to fd 2...
	I0414 11:34:39.219263 1913954 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:34:39.219463 1913954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
	I0414 11:34:39.219684 1913954 out.go:352] Setting JSON to false
	I0414 11:34:39.219729 1913954 mustload.go:65] Loading cluster: multinode-958537
	I0414 11:34:39.219779 1913954 notify.go:220] Checking for updates...
	I0414 11:34:39.220218 1913954 config.go:182] Loaded profile config "multinode-958537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:34:39.220254 1913954 status.go:174] checking status of multinode-958537 ...
	I0414 11:34:39.220787 1913954 cli_runner.go:164] Run: docker container inspect multinode-958537 --format={{.State.Status}}
	I0414 11:34:39.241599 1913954 status.go:371] multinode-958537 host status = "Running" (err=<nil>)
	I0414 11:34:39.241645 1913954 host.go:66] Checking if "multinode-958537" exists ...
	I0414 11:34:39.241956 1913954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-958537
	I0414 11:34:39.260043 1913954 host.go:66] Checking if "multinode-958537" exists ...
	I0414 11:34:39.260322 1913954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 11:34:39.260421 1913954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-958537
	I0414 11:34:39.278204 1913954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/multinode-958537/id_rsa Username:docker}
	I0414 11:34:39.365900 1913954 ssh_runner.go:195] Run: systemctl --version
	I0414 11:34:39.370168 1913954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 11:34:39.381210 1913954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 11:34:39.435301 1913954 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:63 SystemTime:2025-04-14 11:34:39.426208199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0414 11:34:39.435865 1913954 kubeconfig.go:125] found "multinode-958537" server: "https://192.168.67.2:8443"
	I0414 11:34:39.435903 1913954 api_server.go:166] Checking apiserver status ...
	I0414 11:34:39.435937 1913954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0414 11:34:39.447117 1913954 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1504/cgroup
	I0414 11:34:39.456453 1913954 api_server.go:182] apiserver freezer: "11:freezer:/docker/2f0effc0c4d392cc26570521bf79e1c71a0d7be87f5f517cf6b851ce1b934e64/crio/crio-79d778bfb49829d99e38e349e4e91e4eb6e78cf26061c70445f4cf0b09275a76"
	I0414 11:34:39.456527 1913954 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2f0effc0c4d392cc26570521bf79e1c71a0d7be87f5f517cf6b851ce1b934e64/crio/crio-79d778bfb49829d99e38e349e4e91e4eb6e78cf26061c70445f4cf0b09275a76/freezer.state
	I0414 11:34:39.465124 1913954 api_server.go:204] freezer state: "THAWED"
	I0414 11:34:39.465160 1913954 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0414 11:34:39.468961 1913954 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0414 11:34:39.468984 1913954 status.go:463] multinode-958537 apiserver status = Running (err=<nil>)
	I0414 11:34:39.468995 1913954 status.go:176] multinode-958537 status: &{Name:multinode-958537 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 11:34:39.469011 1913954 status.go:174] checking status of multinode-958537-m02 ...
	I0414 11:34:39.469258 1913954 cli_runner.go:164] Run: docker container inspect multinode-958537-m02 --format={{.State.Status}}
	I0414 11:34:39.487280 1913954 status.go:371] multinode-958537-m02 host status = "Running" (err=<nil>)
	I0414 11:34:39.487309 1913954 host.go:66] Checking if "multinode-958537-m02" exists ...
	I0414 11:34:39.487654 1913954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-958537-m02
	I0414 11:34:39.505057 1913954 host.go:66] Checking if "multinode-958537-m02" exists ...
	I0414 11:34:39.505335 1913954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0414 11:34:39.505373 1913954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-958537-m02
	I0414 11:34:39.523085 1913954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/20534-1756784/.minikube/machines/multinode-958537-m02/id_rsa Username:docker}
	I0414 11:34:39.613651 1913954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0414 11:34:39.625143 1913954 status.go:176] multinode-958537-m02 status: &{Name:multinode-958537-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0414 11:34:39.625184 1913954 status.go:174] checking status of multinode-958537-m03 ...
	I0414 11:34:39.625504 1913954 cli_runner.go:164] Run: docker container inspect multinode-958537-m03 --format={{.State.Status}}
	I0414 11:34:39.644349 1913954 status.go:371] multinode-958537-m03 host status = "Stopped" (err=<nil>)
	I0414 11:34:39.644402 1913954 status.go:384] host is not running, skipping remaining checks
	I0414 11:34:39.644411 1913954 status.go:176] multinode-958537-m03 status: &{Name:multinode-958537-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)
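
For reference, the single-node stop flow this test exercises can be reproduced by hand. A minimal sketch against the multinode profile from this run (profile and node names are taken from the log; substitute your own):

	# stop only the m03 worker, leaving the control plane and m02 running
	minikube node stop m03 -p multinode-958537
	# status now reports m03 as Stopped while the other nodes stay Running
	minikube status -p multinode-958537 -v=7 --alsologtostderr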

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-958537 node start m03 -v=7 --alsologtostderr: (8.396849786s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.07s)
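
The restart path is symmetric; a minimal sketch, assuming the m03 worker was stopped as above:

	# bring the stopped worker back and confirm it rejoins the cluster
	minikube node start m03 -p multinode-958537
	minikube status -p multinode-958537
	kubectl get nodes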

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (87.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-958537
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-958537
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-958537: (24.761152452s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-958537 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-958537 --wait=true -v=8 --alsologtostderr: (1m2.912157382s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-958537
--- PASS: TestMultiNode/serial/RestartKeepsNodes (87.78s)
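
The property under test is that a full stop/start cycle preserves the node list. A minimal sketch of the same cycle:

	minikube node list -p multinode-958537        # record the nodes
	minikube stop -p multinode-958537             # stops every node in the profile
	minikube start -p multinode-958537 --wait=true
	minikube node list -p multinode-958537        # should match the earlier list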

                                                
                                    
TestMultiNode/serial/DeleteNode (5.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-958537 node delete m03: (4.430207388s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.01s)
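
The readiness check at multinode_test.go:444 prints one Ready condition per node via a go-template; a sketch of the same verification after deleting a node:

	minikube node delete m03 -p multinode-958537
	# every remaining node should print "True"
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'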

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-958537 stop: (23.627863712s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-958537 status: exit status 7 (90.80812ms)

                                                
                                                
-- stdout --
	multinode-958537
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-958537-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-958537 status --alsologtostderr: exit status 7 (91.112856ms)

                                                
                                                
-- stdout --
	multinode-958537
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-958537-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 11:36:45.264512 1923307 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:36:45.264800 1923307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:36:45.264809 1923307 out.go:358] Setting ErrFile to fd 2...
	I0414 11:36:45.264813 1923307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:36:45.265667 1923307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
	I0414 11:36:45.266008 1923307 out.go:352] Setting JSON to false
	I0414 11:36:45.266135 1923307 mustload.go:65] Loading cluster: multinode-958537
	I0414 11:36:45.266197 1923307 notify.go:220] Checking for updates...
	I0414 11:36:45.266959 1923307 config.go:182] Loaded profile config "multinode-958537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:36:45.266998 1923307 status.go:174] checking status of multinode-958537 ...
	I0414 11:36:45.267551 1923307 cli_runner.go:164] Run: docker container inspect multinode-958537 --format={{.State.Status}}
	I0414 11:36:45.287015 1923307 status.go:371] multinode-958537 host status = "Stopped" (err=<nil>)
	I0414 11:36:45.287051 1923307 status.go:384] host is not running, skipping remaining checks
	I0414 11:36:45.287060 1923307 status.go:176] multinode-958537 status: &{Name:multinode-958537 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0414 11:36:45.287095 1923307 status.go:174] checking status of multinode-958537-m02 ...
	I0414 11:36:45.287346 1923307 cli_runner.go:164] Run: docker container inspect multinode-958537-m02 --format={{.State.Status}}
	I0414 11:36:45.305820 1923307 status.go:371] multinode-958537-m02 host status = "Stopped" (err=<nil>)
	I0414 11:36:45.305863 1923307 status.go:384] host is not running, skipping remaining checks
	I0414 11:36:45.305870 1923307 status.go:176] multinode-958537-m02 status: &{Name:multinode-958537-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.81s)
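
Note that `minikube status` deliberately exits non-zero (status 7 here) when hosts are stopped, so scripts can branch on the exit code instead of parsing the text. A minimal sketch:

	minikube status -p multinode-958537
	echo "status exit code: $?"   # 7 means one or more hosts are stopped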

                                                
                                    
TestMultiNode/serial/RestartMultiNode (45.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-958537 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-958537 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (45.063047078s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-958537 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (45.66s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (27.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-958537
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-958537-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-958537-m02 --driver=docker  --container-runtime=crio: exit status 14 (75.789874ms)

                                                
                                                
-- stdout --
	* [multinode-958537-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20534
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20534-1756784/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-1756784/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-958537-m02' is duplicated with machine name 'multinode-958537-m02' in profile 'multinode-958537'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-958537-m03 --driver=docker  --container-runtime=crio
E0414 11:37:43.691993 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-958537-m03 --driver=docker  --container-runtime=crio: (25.193228618s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-958537
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-958537: exit status 80 (294.354469ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-958537 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-958537-m03 already exists in multinode-958537-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_8.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-958537-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-958537-m03: (1.894121861s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (27.51s)
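
Both failures above come from minikube's profile bookkeeping: a new profile may not reuse a machine name owned by an existing multinode profile (exit 14), and `node add` refuses a node name that already exists as a standalone profile (exit 80). A minimal sketch of the first check:

	minikube start -p multinode-958537-m02 --driver=docker --container-runtime=crio
	echo $?   # 14: MK_USAGE, profile name collides with an existing machine name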

                                                
                                    
TestPreload (104.1s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-369775 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0414 11:38:48.629668 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-369775 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m16.508574224s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-369775 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-369775 image pull gcr.io/k8s-minikube/busybox: (2.279677652s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-369775
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-369775: (5.786667946s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-369775 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-369775 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (16.981802691s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-369775 image list
helpers_test.go:175: Cleaning up "test-preload-369775" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-369775
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-369775: (2.31883929s)
--- PASS: TestPreload (104.10s)
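
The scenario: start an older Kubernetes with --preload=false so nothing is preloaded, pull an extra image, then restart and check that the image survived. A minimal sketch (image and versions taken from the log):

	minikube start -p test-preload-369775 --preload=false --kubernetes-version=v1.24.4 \
	  --driver=docker --container-runtime=crio --memory=2200
	minikube -p test-preload-369775 image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload-369775
	minikube start -p test-preload-369775 --wait=true
	minikube -p test-preload-369775 image list   # busybox should still be listed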

                                                
                                    
TestScheduledStopUnix (98.15s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-476116 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-476116 --memory=2048 --driver=docker  --container-runtime=crio: (21.906268778s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-476116 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-476116 -n scheduled-stop-476116
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-476116 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0414 11:40:08.985181 1763595 retry.go:31] will retry after 87.784µs: open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/scheduled-stop-476116/pid: no such file or directory
I0414 11:40:08.986368 1763595 retry.go:31] will retry after 175.693µs: open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/scheduled-stop-476116/pid: no such file or directory
I0414 11:40:08.987584 1763595 retry.go:31] will retry after 193.781µs: open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/scheduled-stop-476116/pid: no such file or directory
I0414 11:40:08.988766 1763595 retry.go:31] will retry after 341.788µs: open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/scheduled-stop-476116/pid: no such file or directory
I0414 11:40:08.989942 1763595 retry.go:31] will retry after 305.954µs: open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/scheduled-stop-476116/pid: no such file or directory
I0414 11:40:08.991082 1763595 retry.go:31] will retry after 1.134543ms: open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/scheduled-stop-476116/pid: no such file or directory
I0414 11:40:08.993331 1763595 retry.go:31] will retry after 1.335669ms: open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/scheduled-stop-476116/pid: no such file or directory
I0414 11:40:08.995569 1763595 retry.go:31] will retry after 2.294543ms: open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/scheduled-stop-476116/pid: no such file or directory
I0414 11:40:08.998837 1763595 retry.go:31] will retry after 1.627787ms: open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/scheduled-stop-476116/pid: no such file or directory
I0414 11:40:09.001091 1763595 retry.go:31] will retry after 3.240495ms: open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/scheduled-stop-476116/pid: no such file or directory
I0414 11:40:09.005335 1763595 retry.go:31] will retry after 3.246493ms: open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/scheduled-stop-476116/pid: no such file or directory
I0414 11:40:09.009623 1763595 retry.go:31] will retry after 9.471951ms: open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/scheduled-stop-476116/pid: no such file or directory
I0414 11:40:09.019942 1763595 retry.go:31] will retry after 16.616734ms: open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/scheduled-stop-476116/pid: no such file or directory
I0414 11:40:09.037324 1763595 retry.go:31] will retry after 19.888343ms: open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/scheduled-stop-476116/pid: no such file or directory
I0414 11:40:09.057631 1763595 retry.go:31] will retry after 24.57161ms: open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/scheduled-stop-476116/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-476116 --cancel-scheduled
E0414 11:40:11.696580 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-476116 -n scheduled-stop-476116
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-476116
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-476116 --schedule 15s
E0414 11:40:46.760686 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-476116
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-476116: exit status 7 (72.347903ms)

                                                
                                                
-- stdout --
	scheduled-stop-476116
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-476116 -n scheduled-stop-476116
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-476116 -n scheduled-stop-476116: exit status 7 (69.882139ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-476116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-476116
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-476116: (4.884896501s)
--- PASS: TestScheduledStopUnix (98.15s)
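
The scheduled-stop flags exercised above, as a minimal sketch:

	minikube stop -p scheduled-stop-476116 --schedule 5m        # arm a stop 5 minutes out
	minikube status --format='{{.TimeToStop}}' -p scheduled-stop-476116
	minikube stop -p scheduled-stop-476116 --cancel-scheduled   # disarm it
	minikube stop -p scheduled-stop-476116 --schedule 15s       # re-arm; the host stops ~15s later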

                                                
                                    
TestInsufficientStorage (13.09s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-398239 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-398239 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.70259097s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9db21ded-954b-4217-af74-a0d39ec91f6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-398239] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"de96451c-451d-426c-a90f-37f6a5396749","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20534"}}
	{"specversion":"1.0","id":"4495b8c5-de9c-4d26-8943-01bc3d32e4e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"459566da-3810-45b8-b194-516dfd2bc03b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20534-1756784/kubeconfig"}}
	{"specversion":"1.0","id":"5405b9b6-b4fa-4e10-982a-b7770b726f73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-1756784/.minikube"}}
	{"specversion":"1.0","id":"e7b96cdf-d1cb-4c7e-8157-25639a07842f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3c8a388a-2c23-4b70-b290-be91d1ab2274","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"af002bd1-1447-4323-ace8-60160e7650cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c879f264-0fb0-4d44-92c2-f72c11712393","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"022d1e5d-cd6f-45d3-a4cd-5afc2edff79f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"49801f81-c0e5-4c85-b223-73df83772fc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"edfa7054-08f0-46a8-a80f-5c677d99bf87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-398239\" primary control-plane node in \"insufficient-storage-398239\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"603099f4-45dc-4f2d-b10f-b2b8b30e8cec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1744107393-20604 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7a44d6a3-7cd3-4719-b23c-47cca306ebea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b91d281-951f-4c40-b80e-fa8e2125aaf7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-398239 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-398239 --output=json --layout=cluster: exit status 7 (269.817299ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-398239","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-398239","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 11:41:35.767085 1945655 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-398239" does not appear in /home/jenkins/minikube-integration/20534-1756784/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-398239 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-398239 --output=json --layout=cluster: exit status 7 (267.934207ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-398239","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-398239","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0414 11:41:36.035461 1945754 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-398239" does not appear in /home/jenkins/minikube-integration/20534-1756784/kubeconfig
	E0414 11:41:36.045859 1945754 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/insufficient-storage-398239/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-398239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-398239
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-398239: (1.849165335s)
--- PASS: TestInsufficientStorage (13.09s)
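
With --output=json each step is emitted as a CloudEvent, so the storage failure is machine-readable. A minimal sketch of extracting the error event (assumes jq is installed; the MINIKUBE_TEST_* variables mirror what the harness appears to set to simulate a full disk):

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  minikube start -p insufficient-storage-398239 --memory=2048 --output=json \
	  --driver=docker --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode'
	# prints "26" (RSRC_DOCKER_STORAGE)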

                                                
                                    
TestRunningBinaryUpgrade (60.52s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2669380607 start -p running-upgrade-647600 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2669380607 start -p running-upgrade-647600 --memory=2200 --vm-driver=docker  --container-runtime=crio: (25.568320185s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-647600 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-647600 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.785307896s)
helpers_test.go:175: Cleaning up "running-upgrade-647600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-647600
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-647600: (2.752560369s)
--- PASS: TestRunningBinaryUpgrade (60.52s)
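
The upgrade-in-place flow: an older release binary creates the cluster, then the binary under test starts the same profile while it is still running. A minimal sketch (the v1.26.0 binary path is the temp file from this log; any older minikube release works):

	/tmp/minikube-v1.26.0.2669380607 start -p running-upgrade-647600 --memory=2200 \
	  --vm-driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p running-upgrade-647600 --memory=2200 \
	  --driver=docker --container-runtime=crio   # same profile, newer binary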

                                                
                                    
TestKubernetesUpgrade (353.63s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-305175 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-305175 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.03925051s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-305175
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-305175: (2.870795892s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-305175 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-305175 status --format={{.Host}}: exit status 7 (90.588262ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-305175 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0414 11:43:48.629629 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-305175 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.192057468s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-305175 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-305175 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-305175 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (82.016993ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-305175] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20534
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20534-1756784/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-1756784/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-305175
	    minikube start -p kubernetes-upgrade-305175 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3051752 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-305175 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-305175 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-305175 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.389297091s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-305175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-305175
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-305175: (3.902195818s)
--- PASS: TestKubernetesUpgrade (353.63s)
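
The version walk above in sketch form: upgrades are allowed, downgrades are refused with exit 106, and the suggested recovery is to recreate the profile:

	minikube start -p kubernetes-upgrade-305175 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade-305175
	minikube start -p kubernetes-upgrade-305175 --kubernetes-version=v1.32.2 --driver=docker --container-runtime=crio   # upgrade: ok
	minikube start -p kubernetes-upgrade-305175 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio   # downgrade: exit 106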

                                                
                                    
TestMissingContainerUpgrade (142.27s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2422899130 start -p missing-upgrade-150971 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2422899130 start -p missing-upgrade-150971 --memory=2200 --driver=docker  --container-runtime=crio: (1m11.903806427s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-150971
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-150971: (10.428592164s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-150971
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-150971 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-150971 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.695587831s)
helpers_test.go:175: Cleaning up "missing-upgrade-150971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-150971
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-150971: (4.672780929s)
--- PASS: TestMissingContainerUpgrade (142.27s)
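
Here the cluster's container is removed out from under minikube before the new binary runs; start is expected to recreate it. A minimal sketch of the same sequence:

	/tmp/minikube-v1.26.0.2422899130 start -p missing-upgrade-150971 --memory=2200 --driver=docker --container-runtime=crio
	docker stop missing-upgrade-150971 && docker rm missing-upgrade-150971   # simulate the lost container
	out/minikube-linux-amd64 start -p missing-upgrade-150971 --memory=2200 --driver=docker --container-runtime=crio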

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-134783 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-134783 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (94.478311ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-134783] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20534
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20534-1756784/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-1756784/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
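
The two flags are mutually exclusive, and the error text points at the usual culprit: a kubernetes-version pinned in the global config. A minimal sketch of the failure and the suggested fix:

	minikube start -p NoKubernetes-134783 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio
	echo $?                                    # 14 (MK_USAGE)
	minikube config unset kubernetes-version   # clears a globally pinned version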

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (34.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-134783 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-134783 --driver=docker  --container-runtime=crio: (34.476553114s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-134783 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.81s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (19.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-134783 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-134783 --no-kubernetes --driver=docker  --container-runtime=crio: (17.567186135s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-134783 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-134783 status -o json: exit status 2 (310.062496ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-134783","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-134783
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-134783: (1.984881003s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.86s)
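
Re-running start with --no-kubernetes on a profile that already has Kubernetes tears down the components but keeps the machine; status then exits 2 with kubelet and apiserver Stopped, as in the JSON above. A minimal sketch:

	minikube start -p NoKubernetes-134783 --no-kubernetes --driver=docker --container-runtime=crio
	minikube -p NoKubernetes-134783 status -o json
	echo $?   # 2: host Running, Kubernetes components Stopped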

                                                
                                    
TestNoKubernetes/serial/Start (5.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-134783 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-134783 --no-kubernetes --driver=docker  --container-runtime=crio: (5.480857059s)
--- PASS: TestNoKubernetes/serial/Start (5.48s)

                                                
                                    
TestNetworkPlugins/group/false (4.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-218732 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-218732 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (183.05074ms)

                                                
                                                
-- stdout --
	* [false-218732] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20534
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20534-1756784/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-1756784/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0414 11:42:38.067757 1965287 out.go:345] Setting OutFile to fd 1 ...
	I0414 11:42:38.067900 1965287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:42:38.067921 1965287 out.go:358] Setting ErrFile to fd 2...
	I0414 11:42:38.067928 1965287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0414 11:42:38.068655 1965287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-1756784/.minikube/bin
	I0414 11:42:38.069383 1965287 out.go:352] Setting JSON to false
	I0414 11:42:38.071084 1965287 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":159906,"bootTime":1744471052,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0414 11:42:38.071213 1965287 start.go:139] virtualization: kvm guest
	I0414 11:42:38.072923 1965287 out.go:177] * [false-218732] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0414 11:42:38.074686 1965287 out.go:177]   - MINIKUBE_LOCATION=20534
	I0414 11:42:38.074735 1965287 notify.go:220] Checking for updates...
	I0414 11:42:38.077167 1965287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0414 11:42:38.078831 1965287 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20534-1756784/kubeconfig
	I0414 11:42:38.080227 1965287 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-1756784/.minikube
	I0414 11:42:38.081594 1965287 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0414 11:42:38.083189 1965287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0414 11:42:38.085679 1965287 config.go:182] Loaded profile config "NoKubernetes-134783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0414 11:42:38.085826 1965287 config.go:182] Loaded profile config "cert-expiration-163378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:42:38.085955 1965287 config.go:182] Loaded profile config "cert-options-241293": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0414 11:42:38.086069 1965287 driver.go:394] Setting default libvirt URI to qemu:///system
	I0414 11:42:38.114957 1965287 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0414 11:42:38.115048 1965287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0414 11:42:38.178578 1965287 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:74 SystemTime:2025-04-14 11:42:38.167713414 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647996928 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0414 11:42:38.178692 1965287 docker.go:318] overlay module found
	I0414 11:42:38.180740 1965287 out.go:177] * Using the docker driver based on user configuration
	I0414 11:42:38.182321 1965287 start.go:297] selected driver: docker
	I0414 11:42:38.182342 1965287 start.go:901] validating driver "docker" against <nil>
	I0414 11:42:38.182356 1965287 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0414 11:42:38.185335 1965287 out.go:201] 
	W0414 11:42:38.187089 1965287 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0414 11:42:38.188569 1965287 out.go:201] 

                                                
                                                
** /stderr **
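
The rejection is immediate and by design: crio has no built-in networking, so minikube refuses --cni=false with that runtime. A minimal sketch of the failing invocation and a working alternative (bridge is one of minikube's built-in --cni values):

	minikube start -p false-218732 --cni=false --driver=docker --container-runtime=crio    # exit 14: crio requires CNI
	minikube start -p false-218732 --cni=bridge --driver=docker --container-runtime=crio   # a built-in CNI satisfies the check
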
net_test.go:88: 
----------------------- debugLogs start: false-218732 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-218732

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-218732

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-218732

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-218732

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-218732

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-218732

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-218732

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-218732

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-218732

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-218732

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"


>>> host: /etc/hosts:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: /etc/resolv.conf:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-218732

>>> host: crictl pods:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: crictl containers:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> k8s: describe netcat deployment:
error: context "false-218732" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-218732" does not exist

>>> k8s: netcat logs:
error: context "false-218732" does not exist

>>> k8s: describe coredns deployment:
error: context "false-218732" does not exist

>>> k8s: describe coredns pods:
error: context "false-218732" does not exist

>>> k8s: coredns logs:
error: context "false-218732" does not exist

>>> k8s: describe api server pod(s):
error: context "false-218732" does not exist

>>> k8s: api server logs:
error: context "false-218732" does not exist

>>> host: /etc/cni:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: ip a s:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: ip r s:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: iptables-save:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: iptables table nat:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> k8s: describe kube-proxy daemon set:
error: context "false-218732" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-218732" does not exist

>>> k8s: kube-proxy logs:
error: context "false-218732" does not exist

>>> host: kubelet daemon status:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: kubelet daemon config:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> k8s: kubelet logs:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20534-1756784/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 11:42:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-163378
contexts:
- context:
    cluster: cert-expiration-163378
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 11:42:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-163378
  name: cert-expiration-163378
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-163378
  user:
    client-certificate: /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/cert-expiration-163378/client.crt
    client-key: /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/cert-expiration-163378/client.key
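
Note: current-context is empty ("") in the dump above, so kubectl has no default target; only the cert-expiration-163378 entry survives in this kubeconfig, which is consistent with every false-218732 probe around it failing on a missing context. A minimal sketch (illustrative, assuming the cert-expiration-163378 profile still exists) of re-selecting the remaining context:

    # Point kubectl at the one context left in this kubeconfig
    kubectl config use-context cert-expiration-163378
    kubectl config current-context   # should print: cert-expiration-163378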

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-218732

>>> host: docker daemon status:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: docker daemon config:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: /etc/docker/daemon.json:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: docker system info:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: cri-docker daemon status:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: cri-docker daemon config:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: cri-dockerd version:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: containerd daemon status:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: containerd daemon config:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: /etc/containerd/config.toml:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: containerd config dump:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: crio daemon status:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: crio daemon config:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: /etc/crio:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

>>> host: crio config:
* Profile "false-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-218732"

----------------------- debugLogs end: false-218732 [took: 3.64072846s] --------------------------------
helpers_test.go:175: Cleaning up "false-218732" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-218732
--- PASS: TestNetworkPlugins/group/false (4.05s)
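
Note: the wall of "Profile \"false-218732\" not found" / "context was not found" output above is expected. The false variant only checks that minikube rejects an invalid CNI setting, so the profile is never actually created, and the post-mortem debugLogs collector then probes a profile and kubeconfig context that do not exist. A sketch reproducing two of those probes by hand (illustrative only):

    # Both fail by design: no such minikube profile, no such kubeconfig context
    out/minikube-linux-amd64 -p false-218732 ssh "cat /etc/hosts"
    kubectl --context false-218732 get pods -A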

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-134783 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-134783 "sudo systemctl is-active --quiet service kubelet": exit status 1 (275.11359ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
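
Note: "Process exited with status 3" is the success condition here: systemctl is-active exits 0 only for an active unit, and 3 is the conventional "inactive" code, so the kubelet service is confirmed not running. Sketch, assuming a shell on the node:

    # --quiet suppresses output; the exit code alone carries the answer
    sudo systemctl is-active --quiet service kubelet
    echo $?   # 3 here means inactive, i.e. Kubernetes is not running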

TestNoKubernetes/serial/ProfileList (1.82s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.027057876s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.82s)

TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-134783
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-134783: (1.253827815s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

TestNoKubernetes/serial/StartNoArgs (8.81s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-134783 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-134783 --driver=docker  --container-runtime=crio: (8.805365637s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.81s)

TestStoppedBinaryUpgrade/Setup (0.36s)

=== RUN   TestStoppedBinaryUpgrade/Setup
I0414 11:42:46.173789 1763595 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 11:42:46.173887 1763595 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0414 11:42:46.207531 1763595 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0414 11:42:46.207563 1763595 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0414 11:42:46.207643 1763595 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0414 11:42:46.207674 1763595 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate959252135/002/docker-machine-driver-kvm2
I0414 11:42:46.232452 1763595 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate959252135/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0005ef6f8 gz:0xc0005ef780 tar:0xc0005ef730 tar.bz2:0xc0005ef740 tar.gz:0xc0005ef750 tar.xz:0xc0005ef760 tar.zst:0xc0005ef770 tbz2:0xc0005ef740 tgz:0xc0005ef750 txz:0xc0005ef760 tzst:0xc0005ef770 xz:0xc0005ef788 zip:0xc0005ef790 zst:0xc0005ef7a0] Getters:map[file:0xc000b28d50 http:0xc000538410 https:0xc000538460] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0414 11:42:46.232550 1763595 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate959252135/002/docker-machine-driver-kvm2
--- PASS: TestStoppedBinaryUpgrade/Setup (0.36s)
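
Note: the W-lines above show the driver installer's checksum fallback working as intended: the arch-suffixed v1.3.0 artifact has no .sha256 file in the GitHub release (hence "bad response code: 404"), so the download is retried under the common, un-suffixed name. An illustrative way to confirm the 404 by hand:

    # HEAD request against the checksum URL the getter tried first
    curl -sI https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 | head -n 1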

TestStoppedBinaryUpgrade/Upgrade (102.43s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1666975970 start -p stopped-upgrade-687052 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1666975970 start -p stopped-upgrade-687052 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m15.99266076s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1666975970 -p stopped-upgrade-687052 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1666975970 -p stopped-upgrade-687052 stop: (4.734495353s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-687052 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-687052 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.70707547s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (102.43s)
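
Note: the upgrade path exercised here is start-with-old-binary, stop, restart-with-new-binary against the same profile. A hedged sketch for comparing the two binaries involved (paths as in the log; --short is a standard minikube flag):

    /tmp/minikube-v1.26.0.1666975970 version --short    # archived v1.26.0 release
    out/minikube-linux-amd64 version --short            # freshly built binary under test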

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-134783 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-134783 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.462476ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-687052
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

TestPause/serial/Start (48.67s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-916639 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-916639 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (48.666392875s)
--- PASS: TestPause/serial/Start (48.67s)

TestNetworkPlugins/group/auto/Start (42.76s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-218732 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-218732 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.762712011s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.76s)

TestPause/serial/SecondStartNoReconfiguration (18.37s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-916639 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-916639 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.351564353s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (18.37s)

TestNetworkPlugins/group/kindnet/Start (40.6s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-218732 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-218732 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (40.604175376s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (40.60s)

TestPause/serial/Pause (0.81s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-916639 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.81s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-916639 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-916639 --output=json --layout=cluster: exit status 2 (318.180268ms)

-- stdout --
	{"Name":"pause-916639","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-916639","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
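
Note: minikube's cluster-status JSON reuses HTTP-style codes, as the stdout above shows: 200/OK, 405/Stopped, 418/Paused, and the command's exit status 2 likewise signals a not-fully-running cluster, which is exactly what a paused profile should report. A sketch for extracting just the top-level name, assuming jq is installed:

    # Exit code is 2 while paused, but the JSON still lands on stdout
    out/minikube-linux-amd64 status -p pause-916639 --output=json --layout=cluster | jq '.StatusName'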

TestPause/serial/Unpause (0.75s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-916639 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.75s)

TestPause/serial/PauseAgain (0.8s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-916639 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

TestPause/serial/DeletePaused (2.78s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-916639 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-916639 --alsologtostderr -v=5: (2.779485673s)
--- PASS: TestPause/serial/DeletePaused (2.78s)

TestPause/serial/VerifyDeletedResources (0.85s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-916639
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-916639: exit status 1 (17.834748ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-916639: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.85s)
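
Note: deletion is verified negatively: docker volume inspect exiting 1 with "no such volume" (plus the empty [] on stdout) is the desired outcome. An illustrative stand-alone cleanup check:

    # Succeeds only if the volume is really gone
    docker volume inspect pause-916639 >/dev/null 2>&1 || echo "pause-916639 volume removed"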

TestNetworkPlugins/group/calico/Start (56.71s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-218732 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-218732 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (56.713529595s)
--- PASS: TestNetworkPlugins/group/calico/Start (56.71s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-218732 "pgrep -a kubelet"
I0414 11:46:25.006853 1763595 config.go:182] Loaded profile config "auto-218732": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
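
Note: the KubeletFlags check works by listing the kubelet process together with its full command line over minikube ssh, so the test can assert on the flags kubelet was actually started with. Sketch (illustrative):

    # pgrep -a prints the matched PID plus its complete argv;
    # splitting on spaces makes individual flags easy to grep
    out/minikube-linux-amd64 ssh -p auto-218732 "pgrep -a kubelet" | tr ' ' '\n' | grep '^--'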

TestNetworkPlugins/group/auto/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-218732 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-dtrtm" [f93bb1ab-0bf4-41c5-ba4a-d693ee818a64] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-dtrtm" [f93bb1ab-0bf4-41c5-ba4a-d693ee818a64] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003558393s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)
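
Note: NetCatPod force-replaces the netcat deployment and then polls pods labelled app=netcat until they report Ready; the Pending to Running transitions above are that poll. A rough kubectl equivalent (illustrative):

    # Block until the deployment's pods are available, mirroring the 15m test budget
    kubectl --context auto-218732 rollout status deployment/netcat --timeout=15m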

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-218732 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-218732 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-218732 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
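
Note: the three probes above share one pattern: nc in scan mode (-z connect-and-close, -w 5 timeout, -i 5 interval). DNS resolves kubernetes.default inside the pod, Localhost dials 127.0.0.1:8080, and HairPin dials the pod's own service name from inside the pod, which only succeeds when the CNI handles hairpin traffic. The hairpin probe, spelled out:

    # The pod reaches itself through its own Service ("netcat") rather than localhost
    kubectl --context auto-218732 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"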

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-n6nh7" [e2776e19-41ed-4d9b-9387-6386c8ba39ba] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005741471s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
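
Note: ControllerPod waits up to 10m for the CNI's own daemon pod (label app=kindnet in kube-system) to be Running and Ready before any connectivity tests run. A rough kubectl equivalent of that label-selector wait (illustrative):

    kubectl --context kindnet-218732 -n kube-system wait pod -l app=kindnet --for=condition=Ready --timeout=600s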

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-218732 "pgrep -a kubelet"
I0414 11:46:51.137547 1763595 config.go:182] Loaded profile config "kindnet-218732": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-218732 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4b62x" [5bb6d6cd-4ca0-4216-af63-c53fa6fdafc0] Pending
helpers_test.go:344: "netcat-5d86dc444-4b62x" [5bb6d6cd-4ca0-4216-af63-c53fa6fdafc0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-4b62x" [5bb6d6cd-4ca0-4216-af63-c53fa6fdafc0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.009655698s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

TestNetworkPlugins/group/custom-flannel/Start (50.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-218732 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-218732 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (50.397741622s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (50.40s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-218732 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-218732 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-218732 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-m8687" [ad4039a4-13c2-453f-bed6-3ab02983f168] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004851862s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/Start (38.05s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-218732 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-218732 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (38.049331339s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (38.05s)

TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-218732 "pgrep -a kubelet"
I0414 11:47:25.592247 1763595 config.go:182] Loaded profile config "calico-218732": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

TestNetworkPlugins/group/calico/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-218732 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-sg7r8" [91f52091-8bb2-4298-b866-882f88c1621d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-sg7r8" [91f52091-8bb2-4298-b866-882f88c1621d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003899982s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.37s)

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-218732 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-218732 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-218732 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-218732 "pgrep -a kubelet"
I0414 11:47:47.846617 1763595 config.go:182] Loaded profile config "custom-flannel-218732": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-218732 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-jh4zq" [2303fe96-968b-4d4b-9de3-0ade3ef1f161] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-jh4zq" [2303fe96-968b-4d4b-9de3-0ade3ef1f161] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004112987s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.21s)

TestNetworkPlugins/group/flannel/Start (48.89s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-218732 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-218732 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (48.890578345s)
--- PASS: TestNetworkPlugins/group/flannel/Start (48.89s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-218732 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-218732 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-218732 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-218732 "pgrep -a kubelet"
I0414 11:47:59.929955 1763595 config.go:182] Loaded profile config "enable-default-cni-218732": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-218732 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-l8h9l" [4f646266-6cc1-4e32-8490-85dcafbba165] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-l8h9l" [4f646266-6cc1-4e32-8490-85dcafbba165] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004071523s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-218732 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-218732 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-218732 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/bridge/Start (41.12s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-218732 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-218732 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (41.122368458s)
--- PASS: TestNetworkPlugins/group/bridge/Start (41.12s)

TestStartStop/group/old-k8s-version/serial/FirstStart (143.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-697125 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-697125 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m23.053574408s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (143.05s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-wln7t" [c60a6fc6-0be8-4c61-9798-2b3782ac847a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00422144s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestStartStop/group/no-preload/serial/FirstStart (55.77s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-685777 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 11:48:48.629550 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-685777 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (55.770405552s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.77s)
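
Note: the E0414 cert_rotation lines here and at the end of this section are client-go's certificate-reload watcher firing for profiles whose files were already deleted (functional-397992, flannel-218732, and others); they are log noise rather than failures, since the watched client.crt simply no longer exists:

    # Illustrative: the watched path is gone, hence "no such file or directory"
    ls /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt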

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-218732 "pgrep -a kubelet"
I0414 11:48:51.944372 1763595 config.go:182] Loaded profile config "flannel-218732": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (12.22s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-218732 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-tbf2q" [3692143c-8098-4a99-81f3-7380cdecccad] Pending
helpers_test.go:344: "netcat-5d86dc444-tbf2q" [3692143c-8098-4a99-81f3-7380cdecccad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.00536059s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-218732 "pgrep -a kubelet"
I0414 11:49:00.940197 1763595 config.go:182] Loaded profile config "bridge-218732": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-218732 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-56vxw" [f5ffcb0a-039c-4fce-a5aa-f6d708577a5b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-56vxw" [f5ffcb0a-039c-4fce-a5aa-f6d708577a5b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004440049s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-218732 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-218732 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)
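
The Localhost subtest checks that the pod can connect to a listener on its own loopback interface. The nc flags used by the test: -z performs a zero-I/O port scan (connect and close), -w 5 sets a five-second timeout, and -i 5 spaces probes five seconds apart:

	# Exit status 0 means the loopback connection to port 8080 succeeded.
	kubectl --context flannel-218732 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"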

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-218732 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)
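
HairPin differs from Localhost in one detail: the pod dials its own Service name (netcat) rather than loopback, so traffic leaves the pod, hits the Service VIP, and is NATed straight back to the same pod. That round trip only succeeds when the CNI's hairpin mode is working:

	# Same nc probe, but aimed at the pod's own Service name.
	kubectl --context flannel-218732 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"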

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-218732 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-218732 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-218732 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E0414 11:53:41.114967 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/enable-default-cni-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:41.169453 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/calico-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:45.652748 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:45.659193 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:45.670772 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:45.692332 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:45.733775 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:45.815278 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:45.976899 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:46.298609 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:46.939898 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:48.221665 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:48.629561 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:50.783754 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:55.905638 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:01.188603 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/bridge-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:01.195059 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/bridge-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:01.206529 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/bridge-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:01.228024 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/bridge-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:01.269497 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/bridge-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:01.350985 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/bridge-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:01.512527 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/bridge-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:01.834259 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/bridge-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:02.476507 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/bridge-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:03.758610 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/bridge-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:06.147357 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:06.320929 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/bridge-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:09.095814 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/auto-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:09.983055 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/custom-flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:11.443275 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/bridge-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:21.684689 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/bridge-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:22.076768 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/enable-default-cni-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:26.628623 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:28.693298 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/kindnet-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:54:42.166554 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/bridge-218732/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-391375 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-391375 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (43.205376957s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.21s)
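
The default-k8s-diff-port profile exists to exercise --apiserver-port=8444 instead of the default 8443. A quick hand check after start (cluster-info is a stand-in; the test itself only asserts that the start succeeded):

	# The reported control-plane URL should end in :8444 for this profile.
	kubectl --context default-k8s-diff-port-391375 cluster-info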

TestStartStop/group/newest-cni/serial/FirstStart (28.06s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-836137 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-836137 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (28.063221895s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.06s)

TestStartStop/group/no-preload/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-685777 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [692f41b0-7b75-4c48-96c1-0ec2eb01611d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [692f41b0-7b75-4c48-96c1-0ec2eb01611d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.007704639s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-685777 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.33s)
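
DeployApp creates the busybox pod from testdata/busybox.yaml (not reproduced in this report), waits for it, then runs ulimit -n inside it to confirm the container's file-descriptor limit is readable. A sketch with kubectl run and a plain busybox image as hypothetical stand-ins for the testdata manifest:

	# Hypothetical stand-in for testdata/busybox.yaml.
	kubectl --context no-preload-685777 run busybox --image=busybox --restart=Never -- sleep 3600
	kubectl --context no-preload-685777 wait --for=condition=ready pod/busybox --timeout=8m
	kubectl --context no-preload-685777 exec busybox -- /bin/sh -c "ulimit -n"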

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-685777 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-685777 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.020339822s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-685777 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)
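
EnableAddonWhileActive turns on metrics-server while the cluster runs, with --images/--registries repointing the addon at fake.domain. One way to confirm the override landed (the jsonpath query is illustrative; the exact joined registry/image form is minikube's doing and is only assumed here):

	# Expect an image reference prefixed with fake.domain.
	kubectl --context no-preload-685777 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'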

TestStartStop/group/no-preload/serial/Stop (12.29s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-685777 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-685777 --alsologtostderr -v=3: (12.286356539s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.29s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-836137 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-836137 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.215544095s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/newest-cni/serial/Stop (1.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-836137 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-836137 --alsologtostderr -v=3: (1.245501788s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.25s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-685777 -n no-preload-685777
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-685777 -n no-preload-685777: exit status 7 (75.567637ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-685777 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
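
EnableAddonAfterStop leans on the fact that minikube status exits non-zero when the cluster is not fully up; the test explicitly tolerates exit status 7 on a stopped profile ("may be ok" above), then proves addon commands still work against stopped state:

	# Capture and display the status exit code rather than failing on it.
	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-685777 -n no-preload-685777
	echo "status exit code: $?"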

TestStartStop/group/no-preload/serial/SecondStart (278.7s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-685777 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-685777 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (4m38.379728381s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-685777 -n no-preload-685777
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (278.70s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-836137 -n newest-cni-836137
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-836137 -n newest-cni-836137: exit status 7 (89.664394ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-836137 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (13.5s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-836137 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-836137 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (13.182996565s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-836137 -n newest-cni-836137
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.50s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-391375 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5e20781e-399c-4ecb-9838-6a96a0ca5dc5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5e20781e-399c-4ecb-9838-6a96a0ca5dc5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004175616s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-391375 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.40s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-836137 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
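
VerifyKubernetesImages lists every image cached in the profile and flags anything outside the expected Kubernetes set, which is why the two kindest/kindnetd tags are called out above. To browse the same data by hand (jq is assumed to be installed; it is not part of the test):

	out/minikube-linux-amd64 -p newest-cni-836137 image list --format=json | jq .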

TestStartStop/group/newest-cni/serial/Pause (3.05s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-836137 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-836137 -n newest-cni-836137
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-836137 -n newest-cni-836137: exit status 2 (308.2901ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-836137 -n newest-cni-836137
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-836137 -n newest-cni-836137: exit status 2 (312.013096ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-836137 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-836137 -n newest-cni-836137
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-836137 -n newest-cni-836137
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.05s)
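
The Pause sequence reads as: pause the profile, observe that status now reports the API server as Paused and the kubelet as Stopped (each via exit status 2, which the test accepts), then unpause and observe both checks go clean. Condensed:

	out/minikube-linux-amd64 pause -p newest-cni-836137 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-836137 || echo "exit $? while paused"
	out/minikube-linux-amd64 unpause -p newest-cni-836137 --alsologtostderr -v=1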

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-391375 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-391375 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.056439331s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-391375 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-391375 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-391375 --alsologtostderr -v=3: (12.035395506s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

TestStartStop/group/embed-certs/serial/FirstStart (47.68s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-593271 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-593271 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (47.67901085s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (47.68s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-391375 -n default-k8s-diff-port-391375
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-391375 -n default-k8s-diff-port-391375: exit status 7 (74.095477ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-391375 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (297.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-391375 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-391375 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (4m57.397427346s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-391375 -n default-k8s-diff-port-391375
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (297.71s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-697125 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2d015dce-666b-41a4-a10e-85e463a56f7f] Pending
helpers_test.go:344: "busybox" [2d015dce-666b-41a4-a10e-85e463a56f7f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2d015dce-666b-41a4-a10e-85e463a56f7f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004022637s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-697125 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-697125 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-697125 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/old-k8s-version/serial/Stop (11.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-697125 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-697125 --alsologtostderr -v=3: (11.98364144s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.98s)

TestStartStop/group/embed-certs/serial/DeployApp (10.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-593271 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3b8a2c8c-d9de-4058-a8d8-8041693bf432] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3b8a2c8c-d9de-4058-a8d8-8041693bf432] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003164395s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-593271 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.25s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-697125 -n old-k8s-version-697125
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-697125 -n old-k8s-version-697125: exit status 7 (77.99422ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-697125 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (124.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-697125 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-697125 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m4.362281837s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-697125 -n old-k8s-version-697125
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (124.68s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-593271 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-593271 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/embed-certs/serial/Stop (14.61s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-593271 --alsologtostderr -v=3
E0414 11:51:25.236318 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/auto-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:25.242880 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/auto-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:25.254393 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/auto-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:25.275906 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/auto-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:25.317361 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/auto-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:25.398921 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/auto-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:25.560561 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/auto-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:25.882529 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/auto-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:26.524571 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/auto-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:27.806946 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/auto-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:30.368501 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/auto-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:35.489987 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/auto-218732/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-593271 --alsologtostderr -v=3: (14.608374416s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (14.61s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-593271 -n embed-certs-593271
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-593271 -n embed-certs-593271: exit status 7 (75.754172ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-593271 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (301.5s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-593271 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0414 11:51:44.831231 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/kindnet-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:44.837757 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/kindnet-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:44.849287 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/kindnet-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:44.870784 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/kindnet-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:44.912287 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/kindnet-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:44.993730 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/kindnet-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:45.155044 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/kindnet-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:45.476928 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/kindnet-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:45.731341 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/auto-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:46.119987 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/kindnet-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:47.402320 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/kindnet-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:49.964548 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/kindnet-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:51:55.085910 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/kindnet-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:05.327639 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/kindnet-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:06.212747 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/auto-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:19.228288 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/calico-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:19.234759 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/calico-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:19.246180 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/calico-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:19.267593 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/calico-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:19.309043 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/calico-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:19.390558 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/calico-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:19.552151 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/calico-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:19.874475 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/calico-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:20.516654 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/calico-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:21.798908 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/calico-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:24.361218 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/calico-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:25.809541 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/kindnet-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:29.482600 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/calico-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:39.724954 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/calico-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:43.692286 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/addons-295301/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:47.174129 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/auto-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:48.042479 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/custom-flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:48.048943 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/custom-flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:48.060400 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/custom-flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:48.082514 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/custom-flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:48.124285 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/custom-flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:48.206091 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/custom-flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:48.368071 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/custom-flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:48.689814 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/custom-flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:49.331853 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/custom-flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:50.613929 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/custom-flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:53.175635 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/custom-flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:52:58.297850 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/custom-flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:00.138414 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/enable-default-cni-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:00.144806 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/enable-default-cni-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:00.156253 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/enable-default-cni-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:00.177708 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/enable-default-cni-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:00.207148 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/calico-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:00.219556 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/enable-default-cni-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:00.301025 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/enable-default-cni-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:00.462605 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/enable-default-cni-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:00.784313 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/enable-default-cni-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:01.425678 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/enable-default-cni-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:02.707843 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/enable-default-cni-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:05.269337 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/enable-default-cni-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:06.771737 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/kindnet-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:08.539334 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/custom-flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:10.391374 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/enable-default-cni-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:53:20.632739 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/enable-default-cni-218732/client.crt: no such file or directory" logger="UnhandledError"
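The repeated cert_rotation errors above appear to come from client-go's certificate watcher: the test process still holds kubeconfig entries whose client.crt files vanished when the corresponding profiles (custom-flannel-218732, enable-default-cni-218732, calico-218732, kindnet-218732) were deleted. A minimal way to confirm those profiles are gone, assuming the workspace layout of this run (both commands appear elsewhere in this report):

	out/minikube-linux-amd64 profile list
	ls /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/

The errors are noisy but do not appear to affect the pass/fail status of the surrounding tests.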
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-593271 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (5m1.197784884s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-593271 -n embed-certs-593271
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (301.50s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-fmhz8" [d1b631e1-a7e3-4fe2-b01c-930ec0a33e87] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003762891s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-fmhz8" [d1b631e1-a7e3-4fe2-b01c-930ec0a33e87] Running
E0414 11:53:29.021668 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/custom-flannel-218732/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005755867s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-697125 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-697125 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-697125 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-697125 -n old-k8s-version-697125
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-697125 -n old-k8s-version-697125: exit status 2 (304.129595ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-697125 -n old-k8s-version-697125
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-697125 -n old-k8s-version-697125: exit status 2 (302.786658ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-697125 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-697125 -n old-k8s-version-697125
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-697125 -n old-k8s-version-697125
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.69s)
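For reference, the pause/unpause cycle that each of these Pause tests drives can be replayed by hand; a minimal sketch using only the commands the test itself runs (profile name taken from the block above):

	out/minikube-linux-amd64 pause -p old-k8s-version-697125 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-697125 -n old-k8s-version-697125   # expected: "Paused", exit status 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-697125 -n old-k8s-version-697125    # expected: "Stopped", exit status 2
	out/minikube-linux-amd64 unpause -p old-k8s-version-697125 --alsologtostderr -v=1

Exit status 2 from status is expected while the cluster is paused, which is why the test logs it as "may be ok".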

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zpjct" [7d3a0465-9e8a-4a04-887e-56b9a0e9146a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003830653s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zpjct" [7d3a0465-9e8a-4a04-887e-56b9a0e9146a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004334735s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-685777 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-685777 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.74s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-685777 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-685777 -n no-preload-685777
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-685777 -n no-preload-685777: exit status 2 (300.659267ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-685777 -n no-preload-685777
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-685777 -n no-preload-685777: exit status 2 (296.491937ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-685777 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-685777 -n no-preload-685777
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-685777 -n no-preload-685777
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.74s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-t9cfr" [bec674cf-82b3-4e2a-93f6-7ef6aaea7bdb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003985235s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-t9cfr" [bec674cf-82b3-4e2a-93f6-7ef6aaea7bdb] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003940949s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-391375 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-391375 image list --format=json
E0414 11:55:43.998780 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/enable-default-cni-218732/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-391375 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-391375 -n default-k8s-diff-port-391375
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-391375 -n default-k8s-diff-port-391375: exit status 2 (294.196699ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-391375 -n default-k8s-diff-port-391375
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-391375 -n default-k8s-diff-port-391375: exit status 2 (298.729225ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-391375 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-391375 -n default-k8s-diff-port-391375
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-391375 -n default-k8s-diff-port-391375
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.67s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-78rvt" [029b3beb-4a9a-40f2-b502-9eb71b5294ca] Running
E0414 11:56:44.831488 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/kindnet-218732/client.crt: no such file or directory" logger="UnhandledError"
E0414 11:56:45.050774 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/bridge-218732/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004352716s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-78rvt" [029b3beb-4a9a-40f2-b502-9eb71b5294ca] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003118467s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-593271 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-593271 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.67s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-593271 --alsologtostderr -v=1
E0414 11:56:51.697983 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/functional-397992/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-593271 -n embed-certs-593271
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-593271 -n embed-certs-593271: exit status 2 (288.520439ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-593271 -n embed-certs-593271
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-593271 -n embed-certs-593271: exit status 2 (287.565041ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-593271 --alsologtostderr -v=1
E0414 11:56:52.937724 1763595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/auto-218732/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-593271 -n embed-certs-593271
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-593271 -n embed-certs-593271
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.67s)

Test skip (27/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

TestAddons/serial/Volcano (0.27s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-295301 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.27s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.94s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-218732 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-218732

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-218732

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-218732

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-218732

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-218732

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-218732

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-218732

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-218732

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-218732

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-218732

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: /etc/hosts:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: /etc/resolv.conf:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-218732

>>> host: crictl pods:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: crictl containers:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> k8s: describe netcat deployment:
error: context "kubenet-218732" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-218732" does not exist

>>> k8s: netcat logs:
error: context "kubenet-218732" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-218732" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-218732" does not exist

>>> k8s: coredns logs:
error: context "kubenet-218732" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-218732" does not exist

>>> k8s: api server logs:
error: context "kubenet-218732" does not exist

>>> host: /etc/cni:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: ip a s:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: ip r s:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: iptables-save:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: iptables table nat:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-218732" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-218732" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-218732" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: kubelet daemon config:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> k8s: kubelet logs:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20534-1756784/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 11:42:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-163378
contexts:
- context:
    cluster: cert-expiration-163378
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 11:42:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-163378
  name: cert-expiration-163378
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-163378
  user:
    client-certificate: /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/cert-expiration-163378/client.crt
    client-key: /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/cert-expiration-163378/client.key

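Note that current-context is empty in the dump above and the only remaining context points at cert-expiration-163378, which is why every kubectl query against kubenet-218732 fails with "context was not found". If one wanted to inspect the surviving cluster by hand, the standard kubectl invocation would be (a hypothetical follow-up, not part of the test run):

	kubectl config use-context cert-expiration-163378
	kubectl --context cert-expiration-163378 get nodes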
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-218732

>>> host: docker daemon status:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: docker daemon config:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: docker system info:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: cri-docker daemon status:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: cri-docker daemon config:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: cri-dockerd version:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: containerd daemon status:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: containerd daemon config:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: containerd config dump:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: crio daemon status:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: crio daemon config:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: /etc/crio:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

>>> host: crio config:
* Profile "kubenet-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-218732"

----------------------- debugLogs end: kubenet-218732 [took: 3.753140051s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-218732" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-218732
--- SKIP: TestNetworkPlugins/group/kubenet (3.94s)

TestNetworkPlugins/group/cilium (3.91s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-218732 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-218732

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-218732

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-218732

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-218732

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-218732

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-218732

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-218732

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-218732

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-218732

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-218732

>>> host: /etc/nsswitch.conf:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-218732

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-218732" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-218732" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-218732" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-218732" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-218732" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-218732" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-218732" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-218732" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-218732

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-218732

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-218732" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-218732" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-218732

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-218732

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-218732" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-218732" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-218732" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-218732" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-218732" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20534-1756784/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 11:42:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-163378
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20534-1756784/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 11:42:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.94.2:8555
  name: cert-options-241293
contexts:
- context:
    cluster: cert-expiration-163378
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 11:42:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-163378
  name: cert-expiration-163378
- context:
    cluster: cert-options-241293
    extensions:
    - extension:
        last-update: Mon, 14 Apr 2025 11:42:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-options-241293
  name: cert-options-241293
current-context: cert-options-241293
kind: Config
preferences: {}
users:
- name: cert-expiration-163378
  user:
    client-certificate: /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/cert-expiration-163378/client.crt
    client-key: /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/cert-expiration-163378/client.key
- name: cert-options-241293
  user:
    client-certificate: /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/cert-options-241293/client.crt
    client-key: /home/jenkins/minikube-integration/20534-1756784/.minikube/profiles/cert-options-241293/client.key
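
The "context was not found for specified context: cilium-218732" errors throughout this section follow directly from this kubeconfig: only the cert-expiration-163378 and cert-options-241293 contexts exist. A small client-go sketch that checks for a context before using it; the kubeconfig path is an assumption for illustration:

	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical path; substitute the kubeconfig actually in use.
		cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		want := "cilium-218732"
		if _, ok := cfg.Contexts[want]; !ok {
			// This is the condition kubectl reports as
			// "context was not found for specified context".
			fmt.Printf("context %q not in kubeconfig; have:\n", want)
			for name := range cfg.Contexts {
				fmt.Println(" -", name)
			}
		}
	}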

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-218732

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-218732" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-218732"

                                                
                                                
----------------------- debugLogs end: cilium-218732 [took: 3.746767196s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-218732" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-218732
--- SKIP: TestNetworkPlugins/group/cilium (3.91s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-677584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-677584
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)
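
The skip at start_stop_delete_test.go:101 is a driver gate: presumably the mount behavior under test is VirtualBox-specific, so runs on any other driver bail out before starting a cluster. A minimal sketch of that kind of gate, with a hypothetical function name and driver parameter rather than minikube's real plumbing:

	package helpers

	import "testing"

	// maybeSkipDriverMounts mirrors the shape of the gate at
	// start_stop_delete_test.go:101; minikube's actual check differs.
	func maybeSkipDriverMounts(t *testing.T, driver string) {
		if driver != "virtualbox" {
			t.Skipf("skipping %s - only runs on virtualbox", t.Name())
		}
	}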

                                                
                                    