Test Report: Docker_Linux_crio 20400

62166c5b3d4846dcb8bdc6cf847b2364ca5b5915:2025-02-11:38304

Test fail (14/324)

TestAddons/parallel/Ingress (153.5s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-652362 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-652362 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-652362 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2f3521aa-d8e6-4b35-8d49-8735c6a8c8a6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2f3521aa-d8e6-4b35-8d49-8735c6a8c8a6] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.00267204s
I0211 02:04:39.542715   19028 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-652362 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.578672866s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-652362 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
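The failing step above is the in-node curl: "out/minikube-linux-amd64 -p addons-652362 ssh ..." returned exit status 1 after roughly 2m10s, and the inner "ssh: Process exited with status 28" is curl's exit code 28 (operation timed out), which suggests the ingress-nginx controller never answered on 127.0.0.1:80 inside the node. A minimal manual re-check, assuming the addons-652362 profile is still running (these commands are a debugging sketch, not part of the test harness):

	kubectl --context addons-652362 -n ingress-nginx get pods -o wide
	kubectl --context addons-652362 get ingress,svc -o wide
	out/minikube-linux-amd64 -p addons-652362 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-652362 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50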
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-652362
helpers_test.go:235: (dbg) docker inspect addons-652362:

-- stdout --
	[
	    {
	        "Id": "9dc8e143cb81ab6db778b64e0489107efadf6e3c219d67a22e1543b28752f000",
	        "Created": "2025-02-11T02:02:17.844634234Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 21068,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-11T02:02:17.980094755Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e72c4cbe9b296d8a58fbcae1a7b969fa1cee662cd7b86f2d4efc5e146519cf0a",
	        "ResolvConfPath": "/var/lib/docker/containers/9dc8e143cb81ab6db778b64e0489107efadf6e3c219d67a22e1543b28752f000/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9dc8e143cb81ab6db778b64e0489107efadf6e3c219d67a22e1543b28752f000/hostname",
	        "HostsPath": "/var/lib/docker/containers/9dc8e143cb81ab6db778b64e0489107efadf6e3c219d67a22e1543b28752f000/hosts",
	        "LogPath": "/var/lib/docker/containers/9dc8e143cb81ab6db778b64e0489107efadf6e3c219d67a22e1543b28752f000/9dc8e143cb81ab6db778b64e0489107efadf6e3c219d67a22e1543b28752f000-json.log",
	        "Name": "/addons-652362",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-652362:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-652362",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9f3e86e7b8cd0a6e330d0a634daf5e9718448fbe5bf6b7bef543b1f099ca6945-init/diff:/var/lib/docker/overlay2/de28131002c1cf3ac1375d9db63a3e00d2a843930d2c723033b62dc11010311c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9f3e86e7b8cd0a6e330d0a634daf5e9718448fbe5bf6b7bef543b1f099ca6945/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9f3e86e7b8cd0a6e330d0a634daf5e9718448fbe5bf6b7bef543b1f099ca6945/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9f3e86e7b8cd0a6e330d0a634daf5e9718448fbe5bf6b7bef543b1f099ca6945/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-652362",
	                "Source": "/var/lib/docker/volumes/addons-652362/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-652362",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-652362",
	                "name.minikube.sigs.k8s.io": "addons-652362",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7118b6cfad85e4fa0f82f63e46dd9294193ec9e98345eb4daebc6a6410547380",
	            "SandboxKey": "/var/run/docker/netns/7118b6cfad85",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-652362": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "37b546f5d5e876e1a222d44da7c049cb31104e91fd3d369465d18f4824905a21",
	                    "EndpointID": "1aa1f168954e318ca1bfb59186e4896dddcb27a52703bc4e61b581f19bbcd1e4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-652362",
	                        "9dc8e143cb81"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
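For reference, the inspect output above shows the node container is healthy from Docker's point of view: it is Running, the guest ssh port 22/tcp is published on 127.0.0.1:32768, and the node holds 192.168.49.2 on the addons-652362 network, so the failure happened inside the node rather than in the ssh transport. If only the port map is needed, a shorter query (an illustrative convenience, not something the harness runs) would be:

	docker port addons-652362
	docker inspect -f '{{json .NetworkSettings.Ports}}' addons-652362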
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-652362 -n addons-652362
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-652362 logs -n 25: (1.197375263s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-590741 | jenkins | v1.35.0 | 11 Feb 25 02:01 UTC |                     |
	|         | download-docker-590741                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-590741                                                                   | download-docker-590741 | jenkins | v1.35.0 | 11 Feb 25 02:01 UTC | 11 Feb 25 02:01 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-667903   | jenkins | v1.35.0 | 11 Feb 25 02:01 UTC |                     |
	|         | binary-mirror-667903                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33743                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-667903                                                                     | binary-mirror-667903   | jenkins | v1.35.0 | 11 Feb 25 02:01 UTC | 11 Feb 25 02:01 UTC |
	| addons  | disable dashboard -p                                                                        | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:01 UTC |                     |
	|         | addons-652362                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:01 UTC |                     |
	|         | addons-652362                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-652362 --wait=true                                                                | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:01 UTC | 11 Feb 25 02:03 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-652362 addons disable                                                                | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:03 UTC | 11 Feb 25 02:03 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-652362 addons disable                                                                | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | -p addons-652362                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-652362 addons                                                                        | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-652362 addons                                                                        | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-652362 addons disable                                                                | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-652362 ip                                                                            | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	| addons  | addons-652362 addons disable                                                                | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-652362 addons disable                                                                | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-652362 addons disable                                                                | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-652362 addons                                                                        | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-652362 ssh curl -s                                                                   | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-652362 addons                                                                        | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-652362 ssh cat                                                                       | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | /opt/local-path-provisioner/pvc-403aa265-1104-4ee7-870b-3c3f736ca8be_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-652362 addons disable                                                                | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:05 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-652362 addons                                                                        | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:05 UTC | 11 Feb 25 02:05 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-652362 addons                                                                        | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:05 UTC | 11 Feb 25 02:05 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-652362 ip                                                                            | addons-652362          | jenkins | v1.35.0 | 11 Feb 25 02:06 UTC | 11 Feb 25 02:06 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/11 02:01:55
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0211 02:01:55.163098   20333 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:01:55.163187   20333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:01:55.163195   20333 out.go:358] Setting ErrFile to fd 2...
	I0211 02:01:55.163206   20333 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:01:55.163383   20333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
	I0211 02:01:55.163983   20333 out.go:352] Setting JSON to false
	I0211 02:01:55.164811   20333 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2664,"bootTime":1739236651,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 02:01:55.164903   20333 start.go:139] virtualization: kvm guest
	I0211 02:01:55.166977   20333 out.go:177] * [addons-652362] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 02:01:55.168443   20333 notify.go:220] Checking for updates...
	I0211 02:01:55.168481   20333 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 02:01:55.169937   20333 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 02:01:55.171343   20333 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	I0211 02:01:55.172716   20333 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	I0211 02:01:55.174105   20333 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 02:01:55.175425   20333 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 02:01:55.177033   20333 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:01:55.198210   20333 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0211 02:01:55.198278   20333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:01:55.244462   20333 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:49 SystemTime:2025-02-11 02:01:55.236326434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:01:55.244565   20333 docker.go:318] overlay module found
	I0211 02:01:55.246453   20333 out.go:177] * Using the docker driver based on user configuration
	I0211 02:01:55.247781   20333 start.go:297] selected driver: docker
	I0211 02:01:55.247796   20333 start.go:901] validating driver "docker" against <nil>
	I0211 02:01:55.247807   20333 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 02:01:55.248603   20333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:01:55.294509   20333 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:49 SystemTime:2025-02-11 02:01:55.286210099 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:01:55.294647   20333 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0211 02:01:55.294908   20333 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 02:01:55.296695   20333 out.go:177] * Using Docker driver with root privileges
	I0211 02:01:55.298206   20333 cni.go:84] Creating CNI manager for ""
	I0211 02:01:55.298271   20333 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0211 02:01:55.298283   20333 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0211 02:01:55.298355   20333 start.go:340] cluster config:
	{Name:addons-652362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-652362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPau
seInterval:1m0s}
	I0211 02:01:55.299882   20333 out.go:177] * Starting "addons-652362" primary control-plane node in "addons-652362" cluster
	I0211 02:01:55.301153   20333 cache.go:121] Beginning downloading kic base image for docker with crio
	I0211 02:01:55.302501   20333 out.go:177] * Pulling base image v0.0.46 ...
	I0211 02:01:55.303720   20333 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 02:01:55.303766   20333 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0211 02:01:55.303777   20333 cache.go:56] Caching tarball of preloaded images
	I0211 02:01:55.303838   20333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0211 02:01:55.303852   20333 preload.go:172] Found /home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0211 02:01:55.303946   20333 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0211 02:01:55.304346   20333 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/config.json ...
	I0211 02:01:55.304373   20333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/config.json: {Name:mkd72cc407a56e0b3d53f64f857cb4759af19c00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:01:55.319577   20333 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0211 02:01:55.319700   20333 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0211 02:01:55.319716   20333 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0211 02:01:55.319724   20333 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0211 02:01:55.319732   20333 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0211 02:01:55.319739   20333 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from local cache
	I0211 02:02:08.246319   20333 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from cached tarball
	I0211 02:02:08.246367   20333 cache.go:230] Successfully downloaded all kic artifacts
	I0211 02:02:08.246411   20333 start.go:360] acquireMachinesLock for addons-652362: {Name:mk4be1cdb2d5ac7fd4f5511c462ebad6d1852e8b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 02:02:08.246529   20333 start.go:364] duration metric: took 91.195µs to acquireMachinesLock for "addons-652362"
	I0211 02:02:08.246558   20333 start.go:93] Provisioning new machine with config: &{Name:addons-652362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-652362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 02:02:08.246666   20333 start.go:125] createHost starting for "" (driver="docker")
	I0211 02:02:08.337767   20333 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0211 02:02:08.338127   20333 start.go:159] libmachine.API.Create for "addons-652362" (driver="docker")
	I0211 02:02:08.338164   20333 client.go:168] LocalClient.Create starting
	I0211 02:02:08.338299   20333 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem
	I0211 02:02:08.471994   20333 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/cert.pem
	I0211 02:02:08.628307   20333 cli_runner.go:164] Run: docker network inspect addons-652362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0211 02:02:08.645675   20333 cli_runner.go:211] docker network inspect addons-652362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0211 02:02:08.645765   20333 network_create.go:284] running [docker network inspect addons-652362] to gather additional debugging logs...
	I0211 02:02:08.645783   20333 cli_runner.go:164] Run: docker network inspect addons-652362
	W0211 02:02:08.661429   20333 cli_runner.go:211] docker network inspect addons-652362 returned with exit code 1
	I0211 02:02:08.661480   20333 network_create.go:287] error running [docker network inspect addons-652362]: docker network inspect addons-652362: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-652362 not found
	I0211 02:02:08.661493   20333 network_create.go:289] output of [docker network inspect addons-652362]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-652362 not found
	
	** /stderr **
	I0211 02:02:08.661583   20333 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0211 02:02:08.678297   20333 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001caa780}
	I0211 02:02:08.678354   20333 network_create.go:124] attempt to create docker network addons-652362 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0211 02:02:08.678415   20333 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-652362 addons-652362
	I0211 02:02:08.918307   20333 network_create.go:108] docker network addons-652362 192.168.49.0/24 created
	I0211 02:02:08.918336   20333 kic.go:121] calculated static IP "192.168.49.2" for the "addons-652362" container
	I0211 02:02:08.918408   20333 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0211 02:02:08.933944   20333 cli_runner.go:164] Run: docker volume create addons-652362 --label name.minikube.sigs.k8s.io=addons-652362 --label created_by.minikube.sigs.k8s.io=true
	I0211 02:02:09.036733   20333 oci.go:103] Successfully created a docker volume addons-652362
	I0211 02:02:09.036835   20333 cli_runner.go:164] Run: docker run --rm --name addons-652362-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-652362 --entrypoint /usr/bin/test -v addons-652362:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0211 02:02:13.273626   20333 cli_runner.go:217] Completed: docker run --rm --name addons-652362-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-652362 --entrypoint /usr/bin/test -v addons-652362:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib: (4.236753027s)
	I0211 02:02:13.273650   20333 oci.go:107] Successfully prepared a docker volume addons-652362
	I0211 02:02:13.273665   20333 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 02:02:13.273684   20333 kic.go:194] Starting extracting preloaded images to volume ...
	I0211 02:02:13.273740   20333 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-652362:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0211 02:02:17.784032   20333 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-652362:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.510253023s)
	I0211 02:02:17.784063   20333 kic.go:203] duration metric: took 4.510375926s to extract preloaded images to volume ...
	W0211 02:02:17.784210   20333 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0211 02:02:17.784327   20333 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0211 02:02:17.829781   20333 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-652362 --name addons-652362 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-652362 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-652362 --network addons-652362 --ip 192.168.49.2 --volume addons-652362:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0211 02:02:18.174796   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Running}}
	I0211 02:02:18.192645   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:18.212276   20333 cli_runner.go:164] Run: docker exec addons-652362 stat /var/lib/dpkg/alternatives/iptables
	I0211 02:02:18.255593   20333 oci.go:144] the created container "addons-652362" has a running status.
	I0211 02:02:18.255620   20333 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa...
	I0211 02:02:18.581527   20333 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0211 02:02:18.602223   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:18.620034   20333 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0211 02:02:18.620056   20333 kic_runner.go:114] Args: [docker exec --privileged addons-652362 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0211 02:02:18.668034   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:18.689802   20333 machine.go:93] provisionDockerMachine start ...
	I0211 02:02:18.689870   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:18.707620   20333 main.go:141] libmachine: Using SSH client type: native
	I0211 02:02:18.707825   20333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0211 02:02:18.707840   20333 main.go:141] libmachine: About to run SSH command:
	hostname
	I0211 02:02:18.843460   20333 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-652362
	
	I0211 02:02:18.843490   20333 ubuntu.go:169] provisioning hostname "addons-652362"
	I0211 02:02:18.843563   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:18.862078   20333 main.go:141] libmachine: Using SSH client type: native
	I0211 02:02:18.862240   20333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0211 02:02:18.862251   20333 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-652362 && echo "addons-652362" | sudo tee /etc/hostname
	I0211 02:02:18.997972   20333 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-652362
	
	I0211 02:02:18.998064   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:19.014470   20333 main.go:141] libmachine: Using SSH client type: native
	I0211 02:02:19.014636   20333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0211 02:02:19.014652   20333 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-652362' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-652362/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-652362' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0211 02:02:19.139930   20333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0211 02:02:19.139957   20333 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20400-12240/.minikube CaCertPath:/home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20400-12240/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20400-12240/.minikube}
	I0211 02:02:19.139977   20333 ubuntu.go:177] setting up certificates
	I0211 02:02:19.139986   20333 provision.go:84] configureAuth start
	I0211 02:02:19.140030   20333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-652362
	I0211 02:02:19.156624   20333 provision.go:143] copyHostCerts
	I0211 02:02:19.156703   20333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20400-12240/.minikube/ca.pem (1078 bytes)
	I0211 02:02:19.156820   20333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20400-12240/.minikube/cert.pem (1123 bytes)
	I0211 02:02:19.156889   20333 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20400-12240/.minikube/key.pem (1675 bytes)
	I0211 02:02:19.156938   20333 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20400-12240/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca-key.pem org=jenkins.addons-652362 san=[127.0.0.1 192.168.49.2 addons-652362 localhost minikube]
	I0211 02:02:19.320162   20333 provision.go:177] copyRemoteCerts
	I0211 02:02:19.320219   20333 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0211 02:02:19.320251   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:19.336938   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:19.428281   20333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0211 02:02:19.449993   20333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0211 02:02:19.470980   20333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0211 02:02:19.492697   20333 provision.go:87] duration metric: took 352.697491ms to configureAuth
	I0211 02:02:19.492738   20333 ubuntu.go:193] setting minikube options for container-runtime
	I0211 02:02:19.492912   20333 config.go:182] Loaded profile config "addons-652362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:02:19.493008   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:19.510094   20333 main.go:141] libmachine: Using SSH client type: native
	I0211 02:02:19.510269   20333 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0211 02:02:19.510286   20333 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0211 02:02:19.720370   20333 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0211 02:02:19.720394   20333 machine.go:96] duration metric: took 1.030573709s to provisionDockerMachine
	I0211 02:02:19.720403   20333 client.go:171] duration metric: took 11.382233943s to LocalClient.Create
	I0211 02:02:19.720416   20333 start.go:167] duration metric: took 11.38229318s to libmachine.API.Create "addons-652362"
	I0211 02:02:19.720422   20333 start.go:293] postStartSetup for "addons-652362" (driver="docker")
	I0211 02:02:19.720432   20333 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0211 02:02:19.720476   20333 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0211 02:02:19.720508   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:19.737572   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:19.828629   20333 ssh_runner.go:195] Run: cat /etc/os-release
	I0211 02:02:19.831531   20333 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0211 02:02:19.831559   20333 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0211 02:02:19.831567   20333 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0211 02:02:19.831574   20333 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0211 02:02:19.831582   20333 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12240/.minikube/addons for local assets ...
	I0211 02:02:19.831634   20333 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12240/.minikube/files for local assets ...
	I0211 02:02:19.831656   20333 start.go:296] duration metric: took 111.229078ms for postStartSetup
	I0211 02:02:19.831923   20333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-652362
	I0211 02:02:19.848993   20333 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/config.json ...
	I0211 02:02:19.849233   20333 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0211 02:02:19.849277   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:19.865539   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:19.952601   20333 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0211 02:02:19.956608   20333 start.go:128] duration metric: took 11.709927274s to createHost
	I0211 02:02:19.956639   20333 start.go:83] releasing machines lock for "addons-652362", held for 11.710096357s
	I0211 02:02:19.956738   20333 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-652362
	I0211 02:02:19.974072   20333 ssh_runner.go:195] Run: cat /version.json
	I0211 02:02:19.974112   20333 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0211 02:02:19.974119   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:19.974185   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:19.992254   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:19.992729   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:20.079651   20333 ssh_runner.go:195] Run: systemctl --version
	I0211 02:02:20.083751   20333 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0211 02:02:20.227094   20333 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0211 02:02:20.231044   20333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0211 02:02:20.248337   20333 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0211 02:02:20.248431   20333 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0211 02:02:20.273258   20333 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0211 02:02:20.273286   20333 start.go:495] detecting cgroup driver to use...
	I0211 02:02:20.273312   20333 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0211 02:02:20.273348   20333 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0211 02:02:20.286597   20333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0211 02:02:20.296027   20333 docker.go:217] disabling cri-docker service (if available) ...
	I0211 02:02:20.296074   20333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0211 02:02:20.307811   20333 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0211 02:02:20.320651   20333 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0211 02:02:20.394472   20333 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0211 02:02:20.475072   20333 docker.go:233] disabling docker service ...
	I0211 02:02:20.475141   20333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0211 02:02:20.492346   20333 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0211 02:02:20.502877   20333 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0211 02:02:20.578122   20333 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0211 02:02:20.659384   20333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0211 02:02:20.669295   20333 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0211 02:02:20.683334   20333 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0211 02:02:20.683386   20333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:02:20.691620   20333 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0211 02:02:20.691691   20333 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:02:20.700038   20333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:02:20.708436   20333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:02:20.716999   20333 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0211 02:02:20.724965   20333 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:02:20.733510   20333 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:02:20.747099   20333 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:02:20.755714   20333 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0211 02:02:20.763412   20333 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0211 02:02:20.763456   20333 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0211 02:02:20.775823   20333 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0211 02:02:20.783516   20333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 02:02:20.849812   20333 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0211 02:02:20.953365   20333 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0211 02:02:20.953437   20333 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0211 02:02:20.956736   20333 start.go:563] Will wait 60s for crictl version
	I0211 02:02:20.956789   20333 ssh_runner.go:195] Run: which crictl
	I0211 02:02:20.959743   20333 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0211 02:02:20.990993   20333 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0211 02:02:20.991108   20333 ssh_runner.go:195] Run: crio --version
	I0211 02:02:21.025169   20333 ssh_runner.go:195] Run: crio --version
	I0211 02:02:21.060650   20333 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.24.6 ...
	I0211 02:02:21.061949   20333 cli_runner.go:164] Run: docker network inspect addons-652362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0211 02:02:21.078186   20333 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0211 02:02:21.081723   20333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 02:02:21.091850   20333 kubeadm.go:883] updating cluster {Name:addons-652362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-652362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0211 02:02:21.091961   20333 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 02:02:21.092012   20333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 02:02:21.154783   20333 crio.go:514] all images are preloaded for cri-o runtime.
	I0211 02:02:21.154803   20333 crio.go:433] Images already preloaded, skipping extraction
	I0211 02:02:21.154847   20333 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 02:02:21.185080   20333 crio.go:514] all images are preloaded for cri-o runtime.
	I0211 02:02:21.185101   20333 cache_images.go:84] Images are preloaded, skipping loading
	I0211 02:02:21.185109   20333 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.1 crio true true} ...
	I0211 02:02:21.185213   20333 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-652362 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-652362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0211 02:02:21.185294   20333 ssh_runner.go:195] Run: crio config
	I0211 02:02:21.226095   20333 cni.go:84] Creating CNI manager for ""
	I0211 02:02:21.226114   20333 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0211 02:02:21.226122   20333 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0211 02:02:21.226148   20333 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-652362 NodeName:addons-652362 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0211 02:02:21.226268   20333 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-652362"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0211 02:02:21.226323   20333 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0211 02:02:21.234483   20333 binaries.go:44] Found k8s binaries, skipping transfer
	I0211 02:02:21.234549   20333 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0211 02:02:21.242076   20333 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0211 02:02:21.257387   20333 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0211 02:02:21.273367   20333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0211 02:02:21.288735   20333 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0211 02:02:21.291778   20333 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 02:02:21.301123   20333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 02:02:21.362119   20333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 02:02:21.373929   20333 certs.go:68] Setting up /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362 for IP: 192.168.49.2
	I0211 02:02:21.373954   20333 certs.go:194] generating shared ca certs ...
	I0211 02:02:21.373976   20333 certs.go:226] acquiring lock for ca certs: {Name:mk01247a5e2f34c4793d43faa12fab98d68353d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:21.374093   20333 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20400-12240/.minikube/ca.key
	I0211 02:02:21.524333   20333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12240/.minikube/ca.crt ...
	I0211 02:02:21.524372   20333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/ca.crt: {Name:mk31baf669f7626ab5c3faa091c60fef8650fb72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:21.524561   20333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12240/.minikube/ca.key ...
	I0211 02:02:21.524574   20333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/ca.key: {Name:mk028913702a8f1961195ad863b46488d4cea5a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:21.524672   20333 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20400-12240/.minikube/proxy-client-ca.key
	I0211 02:02:21.649088   20333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12240/.minikube/proxy-client-ca.crt ...
	I0211 02:02:21.649120   20333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/proxy-client-ca.crt: {Name:mk5b509a9a2682158d03dbf1b879b0240c8a266d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:21.649307   20333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12240/.minikube/proxy-client-ca.key ...
	I0211 02:02:21.649322   20333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/proxy-client-ca.key: {Name:mka537430b1b255524523b5f1ec1cbf00a6ded14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:21.649419   20333 certs.go:256] generating profile certs ...
	I0211 02:02:21.649477   20333 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.key
	I0211 02:02:21.649489   20333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt with IP's: []
	I0211 02:02:22.000376   20333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt ...
	I0211 02:02:22.000407   20333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: {Name:mk0bdc07e8fc1555061a863258a762d580e5e787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:22.000591   20333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.key ...
	I0211 02:02:22.000604   20333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.key: {Name:mkad807763b5e479c624b7faf9abb2ebec1df4f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:22.000699   20333 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/apiserver.key.4bc567ff
	I0211 02:02:22.000719   20333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/apiserver.crt.4bc567ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0211 02:02:22.133973   20333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/apiserver.crt.4bc567ff ...
	I0211 02:02:22.134001   20333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/apiserver.crt.4bc567ff: {Name:mkc8279259cd2455770c872ccf8a11d524d459bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:22.134170   20333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/apiserver.key.4bc567ff ...
	I0211 02:02:22.134187   20333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/apiserver.key.4bc567ff: {Name:mk535b0a0c732d4c7334cd86aafed08006a9e1dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:22.134283   20333 certs.go:381] copying /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/apiserver.crt.4bc567ff -> /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/apiserver.crt
	I0211 02:02:22.134358   20333 certs.go:385] copying /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/apiserver.key.4bc567ff -> /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/apiserver.key
	I0211 02:02:22.134403   20333 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/proxy-client.key
	I0211 02:02:22.134420   20333 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/proxy-client.crt with IP's: []
	I0211 02:02:22.371126   20333 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/proxy-client.crt ...
	I0211 02:02:22.371156   20333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/proxy-client.crt: {Name:mk51cae7d5e40e2fa4eaee332995fb72b094fc5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:22.371330   20333 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/proxy-client.key ...
	I0211 02:02:22.371343   20333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/proxy-client.key: {Name:mk28730a1c2d578db946993e7c3e5befb97dd47d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:22.371542   20333 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca-key.pem (1679 bytes)
	I0211 02:02:22.371578   20333 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem (1078 bytes)
	I0211 02:02:22.371602   20333 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/cert.pem (1123 bytes)
	I0211 02:02:22.371627   20333 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/key.pem (1675 bytes)
	I0211 02:02:22.372236   20333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0211 02:02:22.393995   20333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0211 02:02:22.414972   20333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0211 02:02:22.436697   20333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0211 02:02:22.458152   20333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0211 02:02:22.479337   20333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0211 02:02:22.501338   20333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0211 02:02:22.522945   20333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0211 02:02:22.544824   20333 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0211 02:02:22.565951   20333 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0211 02:02:22.581794   20333 ssh_runner.go:195] Run: openssl version
	I0211 02:02:22.586779   20333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0211 02:02:22.595520   20333 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0211 02:02:22.598671   20333 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 11 02:02 /usr/share/ca-certificates/minikubeCA.pem
	I0211 02:02:22.598721   20333 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0211 02:02:22.604643   20333 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0211 02:02:22.612715   20333 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0211 02:02:22.615541   20333 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0211 02:02:22.615590   20333 kubeadm.go:392] StartCluster: {Name:addons-652362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-652362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:02:22.615668   20333 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0211 02:02:22.615711   20333 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0211 02:02:22.647490   20333 cri.go:89] found id: ""
	I0211 02:02:22.647552   20333 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0211 02:02:22.655952   20333 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0211 02:02:22.663937   20333 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0211 02:02:22.663996   20333 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0211 02:02:22.671603   20333 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0211 02:02:22.671618   20333 kubeadm.go:157] found existing configuration files:
	
	I0211 02:02:22.671658   20333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0211 02:02:22.679256   20333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0211 02:02:22.679309   20333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0211 02:02:22.686622   20333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0211 02:02:22.694000   20333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0211 02:02:22.694061   20333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0211 02:02:22.701297   20333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0211 02:02:22.708637   20333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0211 02:02:22.708687   20333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0211 02:02:22.715951   20333 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0211 02:02:22.723454   20333 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0211 02:02:22.723505   20333 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0211 02:02:22.731171   20333 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0211 02:02:22.785363   20333 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0211 02:02:22.785677   20333 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-gcp\n", err: exit status 1
	I0211 02:02:22.841679   20333 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0211 02:02:31.676200   20333 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0211 02:02:31.676281   20333 kubeadm.go:310] [preflight] Running pre-flight checks
	I0211 02:02:31.676397   20333 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0211 02:02:31.676460   20333 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-gcp
	I0211 02:02:31.676489   20333 kubeadm.go:310] OS: Linux
	I0211 02:02:31.676527   20333 kubeadm.go:310] CGROUPS_CPU: enabled
	I0211 02:02:31.676570   20333 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0211 02:02:31.676653   20333 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0211 02:02:31.676707   20333 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0211 02:02:31.676756   20333 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0211 02:02:31.676801   20333 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0211 02:02:31.676840   20333 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0211 02:02:31.676881   20333 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0211 02:02:31.676939   20333 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0211 02:02:31.677031   20333 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0211 02:02:31.677115   20333 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0211 02:02:31.677203   20333 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0211 02:02:31.677297   20333 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0211 02:02:31.679850   20333 out.go:235]   - Generating certificates and keys ...
	I0211 02:02:31.679940   20333 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0211 02:02:31.680015   20333 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0211 02:02:31.680090   20333 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0211 02:02:31.680191   20333 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0211 02:02:31.680256   20333 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0211 02:02:31.680322   20333 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0211 02:02:31.680401   20333 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0211 02:02:31.680555   20333 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-652362 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0211 02:02:31.680612   20333 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0211 02:02:31.680726   20333 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-652362 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0211 02:02:31.680782   20333 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0211 02:02:31.680835   20333 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0211 02:02:31.680882   20333 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0211 02:02:31.680949   20333 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0211 02:02:31.680998   20333 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0211 02:02:31.681044   20333 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0211 02:02:31.681109   20333 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0211 02:02:31.681163   20333 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0211 02:02:31.681209   20333 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0211 02:02:31.681281   20333 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0211 02:02:31.681342   20333 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0211 02:02:31.682653   20333 out.go:235]   - Booting up control plane ...
	I0211 02:02:31.682759   20333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0211 02:02:31.682880   20333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0211 02:02:31.682984   20333 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0211 02:02:31.683125   20333 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0211 02:02:31.683210   20333 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0211 02:02:31.683265   20333 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0211 02:02:31.683437   20333 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0211 02:02:31.683560   20333 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0211 02:02:31.683655   20333 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.291546ms
	I0211 02:02:31.683783   20333 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0211 02:02:31.683865   20333 kubeadm.go:310] [api-check] The API server is healthy after 4.501689179s
	I0211 02:02:31.683965   20333 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0211 02:02:31.684085   20333 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0211 02:02:31.684168   20333 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0211 02:02:31.684333   20333 kubeadm.go:310] [mark-control-plane] Marking the node addons-652362 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0211 02:02:31.684397   20333 kubeadm.go:310] [bootstrap-token] Using token: ff0qws.tx8zum66fxou5ba3
	I0211 02:02:31.685729   20333 out.go:235]   - Configuring RBAC rules ...
	I0211 02:02:31.685824   20333 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0211 02:02:31.685901   20333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0211 02:02:31.686060   20333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0211 02:02:31.686197   20333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0211 02:02:31.686351   20333 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0211 02:02:31.686463   20333 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0211 02:02:31.686565   20333 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0211 02:02:31.686610   20333 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0211 02:02:31.686681   20333 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0211 02:02:31.686691   20333 kubeadm.go:310] 
	I0211 02:02:31.686792   20333 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0211 02:02:31.686801   20333 kubeadm.go:310] 
	I0211 02:02:31.686908   20333 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0211 02:02:31.686914   20333 kubeadm.go:310] 
	I0211 02:02:31.686941   20333 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0211 02:02:31.686993   20333 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0211 02:02:31.687046   20333 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0211 02:02:31.687052   20333 kubeadm.go:310] 
	I0211 02:02:31.687107   20333 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0211 02:02:31.687114   20333 kubeadm.go:310] 
	I0211 02:02:31.687168   20333 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0211 02:02:31.687177   20333 kubeadm.go:310] 
	I0211 02:02:31.687248   20333 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0211 02:02:31.687365   20333 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0211 02:02:31.687469   20333 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0211 02:02:31.687478   20333 kubeadm.go:310] 
	I0211 02:02:31.687553   20333 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0211 02:02:31.687622   20333 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0211 02:02:31.687629   20333 kubeadm.go:310] 
	I0211 02:02:31.687748   20333 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ff0qws.tx8zum66fxou5ba3 \
	I0211 02:02:31.687846   20333 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2585e5533b2c5436f5c33785db0dba3d71e3104cee8f0548f45ec36ce8746 \
	I0211 02:02:31.687869   20333 kubeadm.go:310] 	--control-plane 
	I0211 02:02:31.687875   20333 kubeadm.go:310] 
	I0211 02:02:31.687951   20333 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0211 02:02:31.687957   20333 kubeadm.go:310] 
	I0211 02:02:31.688032   20333 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ff0qws.tx8zum66fxou5ba3 \
	I0211 02:02:31.688161   20333 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2585e5533b2c5436f5c33785db0dba3d71e3104cee8f0548f45ec36ce8746 
	I0211 02:02:31.688173   20333 cni.go:84] Creating CNI manager for ""
	I0211 02:02:31.688179   20333 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0211 02:02:31.690394   20333 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0211 02:02:31.691522   20333 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0211 02:02:31.695070   20333 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0211 02:02:31.695085   20333 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0211 02:02:31.711167   20333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0211 02:02:31.904845   20333 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0211 02:02:31.904923   20333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:31.904941   20333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-652362 minikube.k8s.io/updated_at=2025_02_11T02_02_31_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=8e91f70b9b442caa4bec80b031add390ac34d321 minikube.k8s.io/name=addons-652362 minikube.k8s.io/primary=true
	I0211 02:02:31.911714   20333 ops.go:34] apiserver oom_adj: -16
	I0211 02:02:32.119830   20333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:32.620825   20333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:33.120614   20333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:33.620756   20333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:34.119898   20333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:34.620556   20333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:35.120781   20333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:35.620925   20333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:36.120678   20333 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:36.182243   20333 kubeadm.go:1113] duration metric: took 4.277385907s to wait for elevateKubeSystemPrivileges
	I0211 02:02:36.182281   20333 kubeadm.go:394] duration metric: took 13.566693052s to StartCluster
	I0211 02:02:36.182302   20333 settings.go:142] acquiring lock: {Name:mkab2b143b733b0f17bed345e030250b8d37f745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:36.182417   20333 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20400-12240/kubeconfig
	I0211 02:02:36.182892   20333 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/kubeconfig: {Name:mk7d609b79772e5fa84ecd6d15f2188446c79bf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:36.183094   20333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0211 02:02:36.183126   20333 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 02:02:36.183176   20333 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0211 02:02:36.183287   20333 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-652362"
	I0211 02:02:36.183301   20333 addons.go:69] Setting metrics-server=true in profile "addons-652362"
	I0211 02:02:36.183315   20333 addons.go:238] Setting addon metrics-server=true in "addons-652362"
	I0211 02:02:36.183324   20333 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-652362"
	I0211 02:02:36.183320   20333 addons.go:69] Setting inspektor-gadget=true in profile "addons-652362"
	I0211 02:02:36.183326   20333 addons.go:69] Setting default-storageclass=true in profile "addons-652362"
	I0211 02:02:36.183344   20333 host.go:66] Checking if "addons-652362" exists ...
	I0211 02:02:36.183345   20333 addons.go:238] Setting addon inspektor-gadget=true in "addons-652362"
	I0211 02:02:36.183351   20333 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-652362"
	I0211 02:02:36.183355   20333 config.go:182] Loaded profile config "addons-652362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:02:36.183367   20333 addons.go:69] Setting registry=true in profile "addons-652362"
	I0211 02:02:36.183377   20333 addons.go:69] Setting gcp-auth=true in profile "addons-652362"
	I0211 02:02:36.183338   20333 addons.go:69] Setting storage-provisioner=true in profile "addons-652362"
	I0211 02:02:36.183387   20333 addons.go:238] Setting addon registry=true in "addons-652362"
	I0211 02:02:36.183391   20333 addons.go:238] Setting addon storage-provisioner=true in "addons-652362"
	I0211 02:02:36.183395   20333 mustload.go:65] Loading cluster: addons-652362
	I0211 02:02:36.183358   20333 host.go:66] Checking if "addons-652362" exists ...
	I0211 02:02:36.183411   20333 host.go:66] Checking if "addons-652362" exists ...
	I0211 02:02:36.183413   20333 host.go:66] Checking if "addons-652362" exists ...
	I0211 02:02:36.183479   20333 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-652362"
	I0211 02:02:36.183491   20333 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-652362"
	I0211 02:02:36.183573   20333 config.go:182] Loaded profile config "addons-652362": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:02:36.183739   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.183820   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.183888   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.183923   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.183942   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.184036   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.184091   20333 addons.go:69] Setting ingress-dns=true in profile "addons-652362"
	I0211 02:02:36.184182   20333 addons.go:238] Setting addon ingress-dns=true in "addons-652362"
	I0211 02:02:36.184216   20333 host.go:66] Checking if "addons-652362" exists ...
	I0211 02:02:36.183290   20333 addons.go:69] Setting yakd=true in profile "addons-652362"
	I0211 02:02:36.184430   20333 addons.go:238] Setting addon yakd=true in "addons-652362"
	I0211 02:02:36.184454   20333 host.go:66] Checking if "addons-652362" exists ...
	I0211 02:02:36.184897   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.185083   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.183739   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.185427   20333 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-652362"
	I0211 02:02:36.183358   20333 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-652362"
	I0211 02:02:36.185482   20333 addons.go:69] Setting cloud-spanner=true in profile "addons-652362"
	I0211 02:02:36.185512   20333 addons.go:238] Setting addon cloud-spanner=true in "addons-652362"
	I0211 02:02:36.185520   20333 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-652362"
	I0211 02:02:36.185545   20333 host.go:66] Checking if "addons-652362" exists ...
	I0211 02:02:36.185550   20333 host.go:66] Checking if "addons-652362" exists ...
	I0211 02:02:36.185597   20333 addons.go:69] Setting volcano=true in profile "addons-652362"
	I0211 02:02:36.185630   20333 addons.go:238] Setting addon volcano=true in "addons-652362"
	I0211 02:02:36.185658   20333 host.go:66] Checking if "addons-652362" exists ...
	I0211 02:02:36.186037   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.186037   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.186377   20333 addons.go:69] Setting volumesnapshots=true in profile "addons-652362"
	I0211 02:02:36.186398   20333 addons.go:238] Setting addon volumesnapshots=true in "addons-652362"
	I0211 02:02:36.186408   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.186428   20333 host.go:66] Checking if "addons-652362" exists ...
	I0211 02:02:36.190592   20333 out.go:177] * Verifying Kubernetes components...
	I0211 02:02:36.183378   20333 host.go:66] Checking if "addons-652362" exists ...
	I0211 02:02:36.191959   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.192333   20333 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 02:02:36.183368   20333 addons.go:69] Setting ingress=true in profile "addons-652362"
	I0211 02:02:36.192455   20333 addons.go:238] Setting addon ingress=true in "addons-652362"
	I0211 02:02:36.192508   20333 host.go:66] Checking if "addons-652362" exists ...
	I0211 02:02:36.192999   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.199437   20333 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-652362"
	I0211 02:02:36.199492   20333 host.go:66] Checking if "addons-652362" exists ...
	I0211 02:02:36.199970   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.216555   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.228906   20333 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 02:02:36.230397   20333 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 02:02:36.230418   20333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0211 02:02:36.230478   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:36.233687   20333 addons.go:238] Setting addon default-storageclass=true in "addons-652362"
	I0211 02:02:36.233733   20333 host.go:66] Checking if "addons-652362" exists ...
	I0211 02:02:36.234217   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.236212   20333 host.go:66] Checking if "addons-652362" exists ...
	I0211 02:02:36.245948   20333 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0211 02:02:36.246084   20333 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0211 02:02:36.246140   20333 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0211 02:02:36.247481   20333 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0211 02:02:36.247503   20333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0211 02:02:36.247564   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:36.247730   20333 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0211 02:02:36.247742   20333 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0211 02:02:36.247951   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:36.249555   20333 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0211 02:02:36.249559   20333 out.go:177]   - Using image docker.io/registry:2.8.3
	I0211 02:02:36.250823   20333 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0211 02:02:36.250845   20333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0211 02:02:36.250893   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:36.260573   20333 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0211 02:02:36.260740   20333 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0211 02:02:36.260762   20333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0211 02:02:36.260836   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:36.262565   20333 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0211 02:02:36.262587   20333 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0211 02:02:36.262641   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:36.293975   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:36.296468   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:36.296994   20333 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-652362"
	I0211 02:02:36.297046   20333 host.go:66] Checking if "addons-652362" exists ...
	I0211 02:02:36.297493   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:36.300642   20333 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0211 02:02:36.301944   20333 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0211 02:02:36.301967   20333 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0211 02:02:36.302022   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:36.302436   20333 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0211 02:02:36.304287   20333 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0211 02:02:36.304426   20333 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0211 02:02:36.304440   20333 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0211 02:02:36.304499   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	W0211 02:02:36.307158   20333 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0211 02:02:36.307373   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:36.308334   20333 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0211 02:02:36.309707   20333 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0211 02:02:36.311117   20333 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0211 02:02:36.312327   20333 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0211 02:02:36.313617   20333 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0211 02:02:36.313636   20333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0211 02:02:36.313689   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:36.314271   20333 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0211 02:02:36.315581   20333 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0211 02:02:36.316206   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:36.319439   20333 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0211 02:02:36.322436   20333 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0211 02:02:36.323646   20333 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0211 02:02:36.323667   20333 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0211 02:02:36.323729   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:36.329385   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:36.331471   20333 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0211 02:02:36.332757   20333 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0211 02:02:36.332776   20333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0211 02:02:36.332884   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:36.339552   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:36.341656   20333 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0211 02:02:36.341676   20333 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0211 02:02:36.341738   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:36.352141   20333 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0211 02:02:36.353927   20333 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0211 02:02:36.355529   20333 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0211 02:02:36.356125   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:36.357283   20333 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0211 02:02:36.357303   20333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0211 02:02:36.357355   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:36.364861   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:36.367551   20333 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0211 02:02:36.367913   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:36.370382   20333 out.go:177]   - Using image docker.io/busybox:stable
	I0211 02:02:36.371616   20333 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0211 02:02:36.371634   20333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0211 02:02:36.371685   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:36.372083   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:36.376987   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:36.382139   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:36.383113   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:36.388742   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	W0211 02:02:36.423218   20333 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0211 02:02:36.423254   20333 retry.go:31] will retry after 309.91899ms: ssh: handshake failed: EOF
	I0211 02:02:36.518826   20333 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0211 02:02:36.526433   20333 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 02:02:36.633210   20333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0211 02:02:36.718295   20333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 02:02:36.830420   20333 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0211 02:02:36.830503   20333 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0211 02:02:36.842294   20333 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0211 02:02:36.842314   20333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0211 02:02:36.917680   20333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0211 02:02:36.918239   20333 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0211 02:02:36.918260   20333 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0211 02:02:36.923396   20333 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0211 02:02:36.923422   20333 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0211 02:02:36.927273   20333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0211 02:02:36.931691   20333 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0211 02:02:36.931718   20333 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0211 02:02:36.936056   20333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0211 02:02:36.937667   20333 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0211 02:02:36.937708   20333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0211 02:02:37.018123   20333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0211 02:02:37.030403   20333 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0211 02:02:37.030435   20333 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0211 02:02:37.119437   20333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0211 02:02:37.136293   20333 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0211 02:02:37.136318   20333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0211 02:02:37.218446   20333 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0211 02:02:37.218535   20333 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0211 02:02:37.218842   20333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0211 02:02:37.317994   20333 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0211 02:02:37.318028   20333 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0211 02:02:37.418516   20333 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0211 02:02:37.418543   20333 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0211 02:02:37.429433   20333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0211 02:02:37.438426   20333 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0211 02:02:37.438452   20333 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0211 02:02:37.517762   20333 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0211 02:02:37.517796   20333 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0211 02:02:37.519908   20333 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0211 02:02:37.519933   20333 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0211 02:02:37.723096   20333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0211 02:02:37.738017   20333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0211 02:02:37.832119   20333 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0211 02:02:37.832146   20333 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0211 02:02:37.832739   20333 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.313879686s)
	I0211 02:02:37.832765   20333 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0211 02:02:37.833968   20333 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.307506755s)
	I0211 02:02:37.834765   20333 node_ready.go:35] waiting up to 6m0s for node "addons-652362" to be "Ready" ...
	I0211 02:02:37.924524   20333 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0211 02:02:37.924556   20333 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0211 02:02:37.928128   20333 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0211 02:02:37.928153   20333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0211 02:02:38.227756   20333 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0211 02:02:38.227841   20333 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0211 02:02:38.417590   20333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0211 02:02:38.619917   20333 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0211 02:02:38.620028   20333 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0211 02:02:38.630273   20333 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-652362" context rescaled to 1 replicas
	I0211 02:02:38.730239   20333 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0211 02:02:38.730326   20333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0211 02:02:38.735794   20333 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0211 02:02:38.735875   20333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0211 02:02:38.917590   20333 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0211 02:02:38.917680   20333 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0211 02:02:39.127622   20333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0211 02:02:39.230941   20333 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0211 02:02:39.230983   20333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0211 02:02:39.432257   20333 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0211 02:02:39.432299   20333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0211 02:02:39.728121   20333 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0211 02:02:39.728208   20333 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0211 02:02:39.836836   20333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.203584004s)
	I0211 02:02:39.917785   20333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0211 02:02:39.922197   20333 node_ready.go:53] node "addons-652362" has status "Ready":"False"
	I0211 02:02:40.417862   20333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.699523122s)
	I0211 02:02:40.418038   20333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.50031187s)
	I0211 02:02:40.418122   20333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.490818169s)
	I0211 02:02:40.418454   20333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.482367826s)
	I0211 02:02:40.418541   20333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.400387325s)
	I0211 02:02:41.335149   20333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.215661574s)
	I0211 02:02:42.417629   20333 node_ready.go:53] node "addons-652362" has status "Ready":"False"
	I0211 02:02:42.437881   20333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.714678259s)
	I0211 02:02:42.438034   20333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.699979679s)
	I0211 02:02:42.438067   20333 addons.go:479] Verifying addon metrics-server=true in "addons-652362"
	I0211 02:02:42.438119   20333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.020477171s)
	I0211 02:02:42.438183   20333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.008322807s)
	I0211 02:02:42.438202   20333 addons.go:479] Verifying addon registry=true in "addons-652362"
	I0211 02:02:42.438277   20333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.219413043s)
	I0211 02:02:42.438427   20333 addons.go:479] Verifying addon ingress=true in "addons-652362"
	I0211 02:02:42.439787   20333 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-652362 service yakd-dashboard -n yakd-dashboard
	
	I0211 02:02:42.440706   20333 out.go:177] * Verifying registry addon...
	I0211 02:02:42.440752   20333 out.go:177] * Verifying ingress addon...
	I0211 02:02:42.518473   20333 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0211 02:02:42.518904   20333 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0211 02:02:42.521678   20333 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0211 02:02:42.521728   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:42.522236   20333 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0211 02:02:42.522254   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:43.022506   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:43.023274   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:43.242419   20333 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0211 02:02:43.242493   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:43.259420   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:43.417123   20333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.289454676s)
	W0211 02:02:43.417178   20333 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0211 02:02:43.417204   20333 retry.go:31] will retry after 315.162788ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
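	[editor's note] The error above is the usual CRD-before-CR ordering problem: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define its kind, before those CRDs are established. A minimal sketch of the ordering that avoids it (file paths follow the addon manifests above; this is illustrative, not minikube's own retry logic):
	
		# install the snapshot CRDs first
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		# wait until the CRD is established before creating objects of that kind
		kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
		# now the VolumeSnapshotClass itself can be applied
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	
	In the log, minikube instead retries the whole batch a few lines below (the "apply --force" run), which completes without a further error once the CRDs have had time to register.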
	I0211 02:02:43.521853   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:43.521946   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:43.536687   20333 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0211 02:02:43.558121   20333 addons.go:238] Setting addon gcp-auth=true in "addons-652362"
	I0211 02:02:43.558175   20333 host.go:66] Checking if "addons-652362" exists ...
	I0211 02:02:43.558572   20333 cli_runner.go:164] Run: docker container inspect addons-652362 --format={{.State.Status}}
	I0211 02:02:43.575367   20333 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0211 02:02:43.575422   20333 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-652362
	I0211 02:02:43.593903   20333 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/addons-652362/id_rsa Username:docker}
	I0211 02:02:43.733422   20333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0211 02:02:44.016943   20333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.099045199s)
	I0211 02:02:44.017002   20333 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-652362"
	I0211 02:02:44.018933   20333 out.go:177] * Verifying csi-hostpath-driver addon...
	I0211 02:02:44.021891   20333 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0211 02:02:44.022140   20333 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0211 02:02:44.023950   20333 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0211 02:02:44.024483   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:44.025168   20333 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0211 02:02:44.025197   20333 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0211 02:02:44.027138   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:44.027192   20333 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0211 02:02:44.027212   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:44.045633   20333 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0211 02:02:44.045658   20333 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0211 02:02:44.063745   20333 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0211 02:02:44.063764   20333 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0211 02:02:44.135886   20333 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0211 02:02:44.522514   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:44.548234   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:44.548268   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:44.837845   20333 node_ready.go:53] node "addons-652362" has status "Ready":"False"
	I0211 02:02:45.022522   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:45.022692   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:45.023741   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:45.521923   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:45.522135   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:45.523611   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:46.021537   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:46.021700   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:46.024149   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:46.522246   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:46.522309   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:46.523879   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:46.589474   20333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.85600317s)
	I0211 02:02:46.589566   20333 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.453644269s)
	I0211 02:02:46.590507   20333 addons.go:479] Verifying addon gcp-auth=true in "addons-652362"
	I0211 02:02:46.592456   20333 out.go:177] * Verifying gcp-auth addon...
	I0211 02:02:46.594441   20333 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0211 02:02:46.596909   20333 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0211 02:02:46.596928   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:47.022472   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:47.022712   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:47.024496   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:47.096882   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:47.337378   20333 node_ready.go:53] node "addons-652362" has status "Ready":"False"
	I0211 02:02:47.522167   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:47.522252   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:47.523562   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:47.597334   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:48.021153   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:48.021290   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:48.024263   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:48.097807   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:48.521995   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:48.522141   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:48.523758   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:48.597433   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:49.021814   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:49.021990   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:49.023847   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:49.097445   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:49.337884   20333 node_ready.go:53] node "addons-652362" has status "Ready":"False"
	I0211 02:02:49.521491   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:49.521535   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:49.523877   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:49.597142   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:50.021167   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:50.021420   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:50.024221   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:50.098139   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:50.521176   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:50.521175   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:50.523584   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:50.597061   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:51.022373   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:51.022510   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:51.023819   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:51.097324   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:51.521816   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:51.521893   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:51.524559   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:51.596872   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:51.837260   20333 node_ready.go:53] node "addons-652362" has status "Ready":"False"
	I0211 02:02:52.021813   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:52.021957   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:52.023755   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:52.097233   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:52.521213   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:52.521233   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:52.523880   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:52.597476   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:53.021688   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:53.021859   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:53.024406   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:53.097836   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:53.521617   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:53.521820   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:53.524258   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:53.597531   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:53.838163   20333 node_ready.go:53] node "addons-652362" has status "Ready":"False"
	I0211 02:02:54.021480   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:54.021699   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:54.024024   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:54.097461   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:54.521444   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:54.521613   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:54.523954   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:54.597242   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:55.021582   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:55.021715   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:55.024535   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:55.126545   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:55.340612   20333 node_ready.go:49] node "addons-652362" has status "Ready":"True"
	I0211 02:02:55.340642   20333 node_ready.go:38] duration metric: took 17.505842092s for node "addons-652362" to be "Ready" ...
	I0211 02:02:55.340658   20333 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 02:02:55.343770   20333 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-nxm8m" in "kube-system" namespace to be "Ready" ...
	I0211 02:02:55.524049   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:55.524748   20333 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0211 02:02:55.524809   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:55.525266   20333 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0211 02:02:55.525283   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:55.617139   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:56.021836   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:56.022095   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:56.024201   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:56.122660   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:56.521607   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:56.521893   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:56.524692   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:56.596986   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:57.022340   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:57.022384   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:57.023905   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:57.123178   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:57.349622   20333 pod_ready.go:103] pod "amd-gpu-device-plugin-nxm8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:02:57.521312   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:57.521398   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:57.524332   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:57.597590   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:58.021995   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:58.022054   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:58.023859   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:58.122471   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:58.522327   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:58.522396   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:58.523982   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:58.597360   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:59.021743   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:59.021773   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:59.024999   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:59.117674   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:59.522253   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:59.522333   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:59.524245   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:59.618568   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:02:59.848691   20333 pod_ready.go:103] pod "amd-gpu-device-plugin-nxm8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:00.022108   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:00.022509   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:00.024410   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:00.122449   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:00.521514   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:00.521647   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:00.524511   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:00.597837   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:01.022307   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:01.022398   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:01.024362   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:01.097863   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:01.521912   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:01.522032   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:01.524078   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:01.597194   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:01.849155   20333 pod_ready.go:103] pod "amd-gpu-device-plugin-nxm8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:02.022106   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:02.022178   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:02.023831   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:02.123779   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:02.522025   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:02.522231   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:02.524197   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:02.597339   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:03.021649   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:03.021671   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:03.024664   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:03.097184   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:03.521718   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:03.521891   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:03.524641   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:03.596924   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:04.022429   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:04.022437   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:04.023877   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:04.123454   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:04.349577   20333 pod_ready.go:103] pod "amd-gpu-device-plugin-nxm8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:04.521772   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:04.521860   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:04.524996   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:04.597313   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:05.021894   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:05.021906   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:05.024176   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:05.097627   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:05.523386   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:05.523395   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:05.525733   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:05.596750   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:06.021712   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:06.022077   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:06.024095   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:06.118104   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:06.521918   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:06.521954   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:06.524215   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:06.597588   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:06.849475   20333 pod_ready.go:103] pod "amd-gpu-device-plugin-nxm8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:07.022450   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:07.022450   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:07.024249   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:07.097658   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:07.521645   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:07.521669   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:07.524779   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:07.619796   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:08.023402   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:08.023655   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:08.024572   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:08.118848   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:08.521373   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:08.521818   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:08.525462   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:08.618235   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:09.021420   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:09.022068   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:09.024259   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:09.097739   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:09.348626   20333 pod_ready.go:103] pod "amd-gpu-device-plugin-nxm8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:09.521976   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:09.522320   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:09.524245   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:09.597174   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:10.022074   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:10.022230   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:10.024122   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:10.122143   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:10.523287   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:10.523545   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:10.524153   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:10.597317   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:11.021645   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:11.021860   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:11.025365   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:11.117939   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:11.348667   20333 pod_ready.go:103] pod "amd-gpu-device-plugin-nxm8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:11.521528   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:11.521688   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:11.524624   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:11.622591   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:12.021475   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:12.021790   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:12.024664   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:12.096916   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:12.521862   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:12.521886   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:12.524836   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:12.597344   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:13.022373   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:13.022444   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:13.023915   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:13.097623   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:13.348862   20333 pod_ready.go:103] pod "amd-gpu-device-plugin-nxm8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:13.521485   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:13.521488   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:13.524297   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:13.597890   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:14.021923   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:14.021960   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:14.024338   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:14.097737   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:14.521405   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:14.521429   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:14.524286   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:14.597732   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:15.022061   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:15.022165   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:15.024017   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:15.117779   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:15.349072   20333 pod_ready.go:103] pod "amd-gpu-device-plugin-nxm8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:15.523416   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:15.523471   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:15.524863   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:15.618931   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:16.022079   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:16.022122   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:16.024422   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:16.118540   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:16.522410   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:16.522512   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:16.524095   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:16.598160   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:17.022458   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:17.022569   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:17.024266   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:17.097674   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:17.349222   20333 pod_ready.go:103] pod "amd-gpu-device-plugin-nxm8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:17.522488   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:17.522586   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:17.524402   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:17.622717   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:18.022404   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:18.022430   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:18.024133   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:18.097531   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:18.522532   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:18.522782   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:18.525246   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:18.618886   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:19.021371   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:19.021986   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:19.024013   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:19.097347   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:19.349413   20333 pod_ready.go:103] pod "amd-gpu-device-plugin-nxm8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:19.522333   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:19.522388   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:19.524331   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:19.598177   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:20.021615   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:20.021995   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:20.023927   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:20.122187   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:20.522539   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:20.522711   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:20.622404   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:20.628539   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:21.022213   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:21.022265   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:21.024258   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:21.097359   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:21.521534   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:21.521561   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:21.524399   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:21.597486   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:21.849096   20333 pod_ready.go:103] pod "amd-gpu-device-plugin-nxm8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:22.022397   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:22.022597   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:22.024364   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:22.097612   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:22.349137   20333 pod_ready.go:93] pod "amd-gpu-device-plugin-nxm8m" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:22.349161   20333 pod_ready.go:82] duration metric: took 27.00536582s for pod "amd-gpu-device-plugin-nxm8m" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:22.349175   20333 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-wjjv8" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:22.352558   20333 pod_ready.go:93] pod "coredns-668d6bf9bc-wjjv8" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:22.352576   20333 pod_ready.go:82] duration metric: took 3.394575ms for pod "coredns-668d6bf9bc-wjjv8" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:22.352594   20333 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-652362" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:22.355979   20333 pod_ready.go:93] pod "etcd-addons-652362" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:22.355998   20333 pod_ready.go:82] duration metric: took 3.397144ms for pod "etcd-addons-652362" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:22.356024   20333 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-652362" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:22.359106   20333 pod_ready.go:93] pod "kube-apiserver-addons-652362" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:22.359124   20333 pod_ready.go:82] duration metric: took 3.089681ms for pod "kube-apiserver-addons-652362" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:22.359135   20333 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-652362" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:22.362349   20333 pod_ready.go:93] pod "kube-controller-manager-addons-652362" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:22.362366   20333 pod_ready.go:82] duration metric: took 3.223609ms for pod "kube-controller-manager-addons-652362" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:22.362380   20333 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ltsnp" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:22.522442   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:22.522470   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:22.524158   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:22.622743   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:22.748250   20333 pod_ready.go:93] pod "kube-proxy-ltsnp" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:22.748276   20333 pod_ready.go:82] duration metric: took 385.886803ms for pod "kube-proxy-ltsnp" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:22.748288   20333 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-652362" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:23.021447   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:23.021582   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:23.024418   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:23.097842   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:23.148502   20333 pod_ready.go:93] pod "kube-scheduler-addons-652362" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:23.148524   20333 pod_ready.go:82] duration metric: took 400.228372ms for pod "kube-scheduler-addons-652362" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:23.148534   20333 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-9pqgg" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:23.522369   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:23.522582   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:23.524059   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:23.548042   20333 pod_ready.go:93] pod "metrics-server-7fbb699795-9pqgg" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:23.548064   20333 pod_ready.go:82] duration metric: took 399.52381ms for pod "metrics-server-7fbb699795-9pqgg" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:23.548074   20333 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-hdmx2" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:23.597313   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:24.022132   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:24.022327   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:24.024052   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:24.097843   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:24.522105   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:24.522345   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:24.524825   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:24.619075   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:25.023702   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:25.024072   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:25.025303   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:25.120619   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:25.521514   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:25.522990   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:25.524993   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:25.619893   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:25.622433   20333 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-hdmx2" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:26.023413   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:26.024540   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:26.025859   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:26.120409   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:26.523028   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:26.523166   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:26.524652   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:26.617735   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:27.021653   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:27.021757   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:27.025193   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:27.118757   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:27.521718   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:27.521850   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:27.524970   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:27.618085   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:28.022125   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:28.022195   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:28.024139   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:28.053782   20333 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-hdmx2" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:28.097655   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:28.521689   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:28.521709   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:28.524597   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:28.617713   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:29.022272   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:29.022293   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:29.024175   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:29.097360   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:29.521632   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:29.521644   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:29.524824   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:29.617950   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:30.022047   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:30.022188   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:30.024090   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:30.119185   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:30.523012   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:30.523089   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:30.525455   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:30.552844   20333 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-hdmx2" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:30.597278   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:31.021726   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:31.022184   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:31.024060   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:31.117345   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:31.521477   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:31.521506   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:31.524668   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:31.597240   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:32.022582   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:32.022752   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:32.024276   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:32.097274   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:32.522598   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:32.522638   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:32.524302   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:32.555478   20333 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-hdmx2" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:32.597900   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:33.022040   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:33.022094   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:33.024190   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:33.097509   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:33.521769   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:33.521884   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:33.524511   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:33.552802   20333 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-hdmx2" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:33.552827   20333 pod_ready.go:82] duration metric: took 10.0047454s for pod "nvidia-device-plugin-daemonset-hdmx2" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:33.552852   20333 pod_ready.go:39] duration metric: took 38.212157051s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 02:03:33.552876   20333 api_server.go:52] waiting for apiserver process to appear ...
	I0211 02:03:33.552935   20333 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 02:03:33.565015   20333 api_server.go:72] duration metric: took 57.381849218s to wait for apiserver process to appear ...
	I0211 02:03:33.565038   20333 api_server.go:88] waiting for apiserver healthz status ...
	I0211 02:03:33.565055   20333 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0211 02:03:33.569702   20333 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0211 02:03:33.570451   20333 api_server.go:141] control plane version: v1.32.1
	I0211 02:03:33.570473   20333 api_server.go:131] duration metric: took 5.427885ms to wait for apiserver health ...
	I0211 02:03:33.570482   20333 system_pods.go:43] waiting for kube-system pods to appear ...
	I0211 02:03:33.573120   20333 system_pods.go:59] 19 kube-system pods found
	I0211 02:03:33.573144   20333 system_pods.go:61] "amd-gpu-device-plugin-nxm8m" [1d468a5e-64fc-49d1-8894-a802a0e9ebca] Running
	I0211 02:03:33.573150   20333 system_pods.go:61] "coredns-668d6bf9bc-wjjv8" [7ec467bf-e3ad-4de0-b82a-6d8190f2bd12] Running
	I0211 02:03:33.573154   20333 system_pods.go:61] "csi-hostpath-attacher-0" [98bf4ed1-c584-42be-8d0a-f8d3e3e3d6d5] Running
	I0211 02:03:33.573157   20333 system_pods.go:61] "csi-hostpath-resizer-0" [4fff01fd-b1a7-409f-a26f-6d274eef5cf4] Running
	I0211 02:03:33.573166   20333 system_pods.go:61] "csi-hostpathplugin-nc7p4" [9bf14825-efbd-420e-9ca9-4409aab92d42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0211 02:03:33.573171   20333 system_pods.go:61] "etcd-addons-652362" [fce8a3bb-a297-41c4-8b24-b13ddb3d07d7] Running
	I0211 02:03:33.573179   20333 system_pods.go:61] "kindnet-g6pgz" [7c234f9c-82bb-4a27-b1c7-8908331b28ad] Running
	I0211 02:03:33.573183   20333 system_pods.go:61] "kube-apiserver-addons-652362" [5b3afabf-c8f1-4973-8125-6c312de52925] Running
	I0211 02:03:33.573187   20333 system_pods.go:61] "kube-controller-manager-addons-652362" [0afe66d8-a7e0-4162-8e9e-f354d511db37] Running
	I0211 02:03:33.573195   20333 system_pods.go:61] "kube-ingress-dns-minikube" [f81544c0-9288-4dbe-a235-7071cbdcfe65] Running
	I0211 02:03:33.573198   20333 system_pods.go:61] "kube-proxy-ltsnp" [d192b150-d755-4716-aa77-59dd778a6028] Running
	I0211 02:03:33.573201   20333 system_pods.go:61] "kube-scheduler-addons-652362" [50a3f695-c622-40bf-87c4-c46ee504cf48] Running
	I0211 02:03:33.573205   20333 system_pods.go:61] "metrics-server-7fbb699795-9pqgg" [70491abf-576e-4b84-8626-cc4d3735e6df] Running
	I0211 02:03:33.573208   20333 system_pods.go:61] "nvidia-device-plugin-daemonset-hdmx2" [daa5b722-96f2-4bec-b731-9603806ec3fa] Running
	I0211 02:03:33.573212   20333 system_pods.go:61] "registry-6c88467877-7vlrg" [34653488-a8f9-4101-bc06-960cfcdc4ff1] Running
	I0211 02:03:33.573216   20333 system_pods.go:61] "registry-proxy-d9448" [42a4d4a0-7f74-47a5-9bcd-e482b88b201b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0211 02:03:33.573222   20333 system_pods.go:61] "snapshot-controller-68b874b76f-l5h2c" [b5849d99-a135-4cff-805c-75b535210469] Running
	I0211 02:03:33.573226   20333 system_pods.go:61] "snapshot-controller-68b874b76f-vbjmm" [33f52f8c-ec15-49b7-8567-15ccadbaaba6] Running
	I0211 02:03:33.573230   20333 system_pods.go:61] "storage-provisioner" [589d63d9-b0b6-4543-9b05-574e06b0f77f] Running
	I0211 02:03:33.573235   20333 system_pods.go:74] duration metric: took 2.747738ms to wait for pod list to return data ...
	I0211 02:03:33.573244   20333 default_sa.go:34] waiting for default service account to be created ...
	I0211 02:03:33.575146   20333 default_sa.go:45] found service account: "default"
	I0211 02:03:33.575163   20333 default_sa.go:55] duration metric: took 1.913382ms for default service account to be created ...
	I0211 02:03:33.575172   20333 system_pods.go:116] waiting for k8s-apps to be running ...
	I0211 02:03:33.577936   20333 system_pods.go:86] 19 kube-system pods found
	I0211 02:03:33.577961   20333 system_pods.go:89] "amd-gpu-device-plugin-nxm8m" [1d468a5e-64fc-49d1-8894-a802a0e9ebca] Running
	I0211 02:03:33.577969   20333 system_pods.go:89] "coredns-668d6bf9bc-wjjv8" [7ec467bf-e3ad-4de0-b82a-6d8190f2bd12] Running
	I0211 02:03:33.577975   20333 system_pods.go:89] "csi-hostpath-attacher-0" [98bf4ed1-c584-42be-8d0a-f8d3e3e3d6d5] Running
	I0211 02:03:33.577992   20333 system_pods.go:89] "csi-hostpath-resizer-0" [4fff01fd-b1a7-409f-a26f-6d274eef5cf4] Running
	I0211 02:03:33.578008   20333 system_pods.go:89] "csi-hostpathplugin-nc7p4" [9bf14825-efbd-420e-9ca9-4409aab92d42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0211 02:03:33.578014   20333 system_pods.go:89] "etcd-addons-652362" [fce8a3bb-a297-41c4-8b24-b13ddb3d07d7] Running
	I0211 02:03:33.578022   20333 system_pods.go:89] "kindnet-g6pgz" [7c234f9c-82bb-4a27-b1c7-8908331b28ad] Running
	I0211 02:03:33.578035   20333 system_pods.go:89] "kube-apiserver-addons-652362" [5b3afabf-c8f1-4973-8125-6c312de52925] Running
	I0211 02:03:33.578041   20333 system_pods.go:89] "kube-controller-manager-addons-652362" [0afe66d8-a7e0-4162-8e9e-f354d511db37] Running
	I0211 02:03:33.578047   20333 system_pods.go:89] "kube-ingress-dns-minikube" [f81544c0-9288-4dbe-a235-7071cbdcfe65] Running
	I0211 02:03:33.578056   20333 system_pods.go:89] "kube-proxy-ltsnp" [d192b150-d755-4716-aa77-59dd778a6028] Running
	I0211 02:03:33.578062   20333 system_pods.go:89] "kube-scheduler-addons-652362" [50a3f695-c622-40bf-87c4-c46ee504cf48] Running
	I0211 02:03:33.578069   20333 system_pods.go:89] "metrics-server-7fbb699795-9pqgg" [70491abf-576e-4b84-8626-cc4d3735e6df] Running
	I0211 02:03:33.578074   20333 system_pods.go:89] "nvidia-device-plugin-daemonset-hdmx2" [daa5b722-96f2-4bec-b731-9603806ec3fa] Running
	I0211 02:03:33.578081   20333 system_pods.go:89] "registry-6c88467877-7vlrg" [34653488-a8f9-4101-bc06-960cfcdc4ff1] Running
	I0211 02:03:33.578089   20333 system_pods.go:89] "registry-proxy-d9448" [42a4d4a0-7f74-47a5-9bcd-e482b88b201b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0211 02:03:33.578097   20333 system_pods.go:89] "snapshot-controller-68b874b76f-l5h2c" [b5849d99-a135-4cff-805c-75b535210469] Running
	I0211 02:03:33.578107   20333 system_pods.go:89] "snapshot-controller-68b874b76f-vbjmm" [33f52f8c-ec15-49b7-8567-15ccadbaaba6] Running
	I0211 02:03:33.578112   20333 system_pods.go:89] "storage-provisioner" [589d63d9-b0b6-4543-9b05-574e06b0f77f] Running
	I0211 02:03:33.578124   20333 system_pods.go:126] duration metric: took 2.945819ms to wait for k8s-apps to be running ...
	I0211 02:03:33.578135   20333 system_svc.go:44] waiting for kubelet service to be running ....
	I0211 02:03:33.578185   20333 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 02:03:33.589458   20333 system_svc.go:56] duration metric: took 11.316746ms WaitForService to wait for kubelet
	I0211 02:03:33.589482   20333 kubeadm.go:582] duration metric: took 57.406322879s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 02:03:33.589504   20333 node_conditions.go:102] verifying NodePressure condition ...
	I0211 02:03:33.591668   20333 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0211 02:03:33.591689   20333 node_conditions.go:123] node cpu capacity is 8
	I0211 02:03:33.591700   20333 node_conditions.go:105] duration metric: took 2.191558ms to run NodePressure ...
	I0211 02:03:33.591711   20333 start.go:241] waiting for startup goroutines ...
	I0211 02:03:33.596434   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:34.021647   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:34.021912   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:34.024221   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:34.117883   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:34.522349   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:34.522466   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:34.524083   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:34.597217   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:35.022530   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:35.022633   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:35.024718   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:35.097498   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:35.522068   20333 kapi.go:107] duration metric: took 53.003160895s to wait for kubernetes.io/minikube-addons=registry ...
	I0211 02:03:35.522091   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:35.523913   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:35.597162   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:36.021954   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:36.024290   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:36.097567   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:36.522613   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:36.525522   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:36.618543   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:37.021117   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:37.024064   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:37.097377   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:37.521289   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:37.524411   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:37.597945   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:38.021636   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:38.024953   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:38.097318   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:38.522767   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:38.524873   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:38.597291   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:39.021413   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:39.024352   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:39.121441   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:39.522652   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:39.524551   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:39.617713   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:40.021736   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:40.024722   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:40.097280   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:40.523509   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:40.623791   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:40.623895   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:41.022486   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:41.025756   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:41.118744   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:41.523074   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:41.525102   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:41.622530   20333 kapi.go:107] duration metric: took 55.028082086s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0211 02:03:41.624371   20333 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-652362 cluster.
	I0211 02:03:41.626288   20333 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0211 02:03:41.627640   20333 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0211 02:03:42.021614   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:42.025376   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:42.521641   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:42.525022   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:43.023392   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:43.025120   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:43.522832   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:43.525031   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:44.021602   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:44.024674   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:44.522019   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:44.524268   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:45.022840   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:45.024766   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:45.522614   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:45.524692   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:46.021628   20333 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:46.025070   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:46.522449   20333 kapi.go:107] duration metric: took 1m4.003978979s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0211 02:03:46.524236   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:47.026115   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:47.620669   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:48.025620   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:48.525311   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:49.025959   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:49.525802   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:50.024624   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:50.525468   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:51.025503   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:51.524651   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:52.024920   20333 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:52.525959   20333 kapi.go:107] duration metric: took 1m8.504070241s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0211 02:03:52.527845   20333 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, cloud-spanner, amd-gpu-device-plugin, default-storageclass, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0211 02:03:52.529308   20333 addons.go:514] duration metric: took 1m16.346131114s for enable addons: enabled=[ingress-dns storage-provisioner nvidia-device-plugin cloud-spanner amd-gpu-device-plugin default-storageclass inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0211 02:03:52.529366   20333 start.go:246] waiting for cluster config update ...
	I0211 02:03:52.529393   20333 start.go:255] writing updated cluster config ...
	I0211 02:03:52.529710   20333 ssh_runner.go:195] Run: rm -f paused
	I0211 02:03:52.585844   20333 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0211 02:03:52.587531   20333 out.go:177] * Done! kubectl is now configured to use "addons-652362" cluster and "default" namespace by default
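	(Context for the gcp-auth messages in the start log above: the addon mounts GCP credentials into every newly created pod unless the pod configuration carries the `gcp-auth-skip-secret` label key. Below is a minimal, illustrative pod manifest for opting out; the pod name and the "true" value are assumptions for the example, since the log message only requires the label key to be present.)

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                  # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"      # presence of this key tells the gcp-auth webhook to skip mounting credentials
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "infinity"]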
	
	
	==> CRI-O <==
	Feb 11 02:05:31 addons-652362 crio[1041]: time="2025-02-11 02:05:31.772884426Z" level=info msg="Removed pod sandbox: 85c8c8a331d6996be23774eabff6ca5a15809d84a0fbc584a7987b53d00978ad" id=2921c978-20c9-4f6b-bf40-49279ffcfdec name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 11 02:05:31 addons-652362 crio[1041]: time="2025-02-11 02:05:31.773237989Z" level=info msg="Stopping pod sandbox: 0a708bf1dd6bc91648da3dd5150430acbb013d2fbd35be83b6bd8e86463f7af6" id=acb91ffa-b861-4b19-9ee3-d418b451c91e name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 11 02:05:31 addons-652362 crio[1041]: time="2025-02-11 02:05:31.773285535Z" level=info msg="Stopped pod sandbox (already stopped): 0a708bf1dd6bc91648da3dd5150430acbb013d2fbd35be83b6bd8e86463f7af6" id=acb91ffa-b861-4b19-9ee3-d418b451c91e name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 11 02:05:31 addons-652362 crio[1041]: time="2025-02-11 02:05:31.773526720Z" level=info msg="Removing pod sandbox: 0a708bf1dd6bc91648da3dd5150430acbb013d2fbd35be83b6bd8e86463f7af6" id=10b7d952-d49b-446f-bbf6-758552d6a461 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 11 02:05:31 addons-652362 crio[1041]: time="2025-02-11 02:05:31.779820491Z" level=info msg="Removed pod sandbox: 0a708bf1dd6bc91648da3dd5150430acbb013d2fbd35be83b6bd8e86463f7af6" id=10b7d952-d49b-446f-bbf6-758552d6a461 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 11 02:05:31 addons-652362 crio[1041]: time="2025-02-11 02:05:31.780170476Z" level=info msg="Stopping pod sandbox: d0a2953a14334e2ac9939d3bbf800ac54aae94c0e0d0f64d82dad68ab9d5f7c8" id=c6c9b6cb-636e-4927-ac18-129e15c62316 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 11 02:05:31 addons-652362 crio[1041]: time="2025-02-11 02:05:31.780204738Z" level=info msg="Stopped pod sandbox (already stopped): d0a2953a14334e2ac9939d3bbf800ac54aae94c0e0d0f64d82dad68ab9d5f7c8" id=c6c9b6cb-636e-4927-ac18-129e15c62316 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 11 02:05:31 addons-652362 crio[1041]: time="2025-02-11 02:05:31.780482127Z" level=info msg="Removing pod sandbox: d0a2953a14334e2ac9939d3bbf800ac54aae94c0e0d0f64d82dad68ab9d5f7c8" id=59f53709-a900-47d5-9f1e-d5574208462f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 11 02:05:31 addons-652362 crio[1041]: time="2025-02-11 02:05:31.786298292Z" level=info msg="Removed pod sandbox: d0a2953a14334e2ac9939d3bbf800ac54aae94c0e0d0f64d82dad68ab9d5f7c8" id=59f53709-a900-47d5-9f1e-d5574208462f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 11 02:05:31 addons-652362 crio[1041]: time="2025-02-11 02:05:31.786616366Z" level=info msg="Stopping pod sandbox: 301ecb9a4b899b4ddeb28546caffaea7be8aaf8802d7c3560763b952b29652a1" id=4e56a279-a6fa-40f9-923f-64a8700ba18a name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 11 02:05:31 addons-652362 crio[1041]: time="2025-02-11 02:05:31.786642186Z" level=info msg="Stopped pod sandbox (already stopped): 301ecb9a4b899b4ddeb28546caffaea7be8aaf8802d7c3560763b952b29652a1" id=4e56a279-a6fa-40f9-923f-64a8700ba18a name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 11 02:05:31 addons-652362 crio[1041]: time="2025-02-11 02:05:31.786970071Z" level=info msg="Removing pod sandbox: 301ecb9a4b899b4ddeb28546caffaea7be8aaf8802d7c3560763b952b29652a1" id=cf8db5f9-cecf-417d-a828-de037d2f0303 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 11 02:05:31 addons-652362 crio[1041]: time="2025-02-11 02:05:31.793782933Z" level=info msg="Removed pod sandbox: 301ecb9a4b899b4ddeb28546caffaea7be8aaf8802d7c3560763b952b29652a1" id=cf8db5f9-cecf-417d-a828-de037d2f0303 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 11 02:06:50 addons-652362 crio[1041]: time="2025-02-11 02:06:50.550982785Z" level=info msg="Running pod sandbox: default/hello-world-app-7d9564db4-t9m6g/POD" id=094a9bbc-5c23-4039-8532-817dc3d7bdd1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 11 02:06:50 addons-652362 crio[1041]: time="2025-02-11 02:06:50.551055511Z" level=warning msg="Allowed annotations are specified for workload []"
	Feb 11 02:06:50 addons-652362 crio[1041]: time="2025-02-11 02:06:50.574338634Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-t9m6g Namespace:default ID:8d7a43333b56fd732f3b9de5906d1eecf32a8bb98e40bfef67da56a68e4eac9e UID:07ec65ac-1cdc-4046-a73f-8bbf8cb983a9 NetNS:/var/run/netns/27048017-e0fa-4227-b9ca-71163af6bf9d Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 11 02:06:50 addons-652362 crio[1041]: time="2025-02-11 02:06:50.574373538Z" level=info msg="Adding pod default_hello-world-app-7d9564db4-t9m6g to CNI network \"kindnet\" (type=ptp)"
	Feb 11 02:06:50 addons-652362 crio[1041]: time="2025-02-11 02:06:50.619652697Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-t9m6g Namespace:default ID:8d7a43333b56fd732f3b9de5906d1eecf32a8bb98e40bfef67da56a68e4eac9e UID:07ec65ac-1cdc-4046-a73f-8bbf8cb983a9 NetNS:/var/run/netns/27048017-e0fa-4227-b9ca-71163af6bf9d Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Feb 11 02:06:50 addons-652362 crio[1041]: time="2025-02-11 02:06:50.619787879Z" level=info msg="Checking pod default_hello-world-app-7d9564db4-t9m6g for CNI network kindnet (type=ptp)"
	Feb 11 02:06:50 addons-652362 crio[1041]: time="2025-02-11 02:06:50.622302438Z" level=info msg="Ran pod sandbox 8d7a43333b56fd732f3b9de5906d1eecf32a8bb98e40bfef67da56a68e4eac9e with infra container: default/hello-world-app-7d9564db4-t9m6g/POD" id=094a9bbc-5c23-4039-8532-817dc3d7bdd1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Feb 11 02:06:50 addons-652362 crio[1041]: time="2025-02-11 02:06:50.623362684Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9f7b8249-b013-4419-a46f-d667d22959e2 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:06:50 addons-652362 crio[1041]: time="2025-02-11 02:06:50.623585617Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=9f7b8249-b013-4419-a46f-d667d22959e2 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:06:50 addons-652362 crio[1041]: time="2025-02-11 02:06:50.624099504Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=27d2ae52-9d53-4f5b-ad19-7429a9daa9f8 name=/runtime.v1.ImageService/PullImage
	Feb 11 02:06:50 addons-652362 crio[1041]: time="2025-02-11 02:06:50.640848656Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Feb 11 02:06:51 addons-652362 crio[1041]: time="2025-02-11 02:06:51.112768027Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0bb9486fe88a6       docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da                              2 minutes ago       Running             nginx                     0                   a152008ccb918       nginx
	4f7372dc81455       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   08731ce9e87ea       busybox
	e8fe53f27ac75       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   7905c0468ab81       ingress-nginx-controller-56d7c84fd4-mmnrg
	de023cab53056       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              patch                     0                   1e5ca80fd049a       ingress-nginx-admission-patch-h66hv
	7056349f4bf81       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   4452b63a0aeda       ingress-nginx-admission-create-vvqhv
	d4d17dedd8ef9       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             3 minutes ago       Running             minikube-ingress-dns      0                   cfa446f6ae495       kube-ingress-dns-minikube
	6c72590e32d73       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             3 minutes ago       Running             coredns                   0                   c621870b61f7d       coredns-668d6bf9bc-wjjv8
	779e6ad1acdff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   6ccdaf6933df6       storage-provisioner
	f79529c20f207       docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26                           4 minutes ago       Running             kindnet-cni               0                   16eca359eafb5       kindnet-g6pgz
	9bd98afba233d       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                             4 minutes ago       Running             kube-proxy                0                   6663f249419b2       kube-proxy-ltsnp
	ec8b289006056       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                             4 minutes ago       Running             kube-scheduler            0                   469546f5416a0       kube-scheduler-addons-652362
	ed2b68ebf0960       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                             4 minutes ago       Running             kube-controller-manager   0                   d95ec15547b8d       kube-controller-manager-addons-652362
	100e8a619e2c2       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                             4 minutes ago       Running             kube-apiserver            0                   8b9d77799ad86       kube-apiserver-addons-652362
	5510e4b7c6a29       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   c2533fb6d3621       etcd-addons-652362
	
	
	==> coredns [6c72590e32d73d17c97b87333053a68a59c75475824f28c902cb470a3f5793b7] <==
	[INFO] 10.244.0.18:54777 - 46069 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000113377s
	[INFO] 10.244.0.18:48006 - 34914 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.003700397s
	[INFO] 10.244.0.18:48006 - 35213 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.003830133s
	[INFO] 10.244.0.18:48285 - 8627 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004636403s
	[INFO] 10.244.0.18:48285 - 8286 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005485825s
	[INFO] 10.244.0.18:52172 - 26022 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004234631s
	[INFO] 10.244.0.18:52172 - 26254 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005638341s
	[INFO] 10.244.0.18:40634 - 11949 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000114051s
	[INFO] 10.244.0.18:40634 - 12256 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000145289s
	[INFO] 10.244.0.21:53417 - 1501 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000201214s
	[INFO] 10.244.0.21:39396 - 26289 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000346503s
	[INFO] 10.244.0.21:34575 - 21847 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000141983s
	[INFO] 10.244.0.21:42056 - 15728 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100158s
	[INFO] 10.244.0.21:45454 - 20926 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123966s
	[INFO] 10.244.0.21:37808 - 28132 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000160145s
	[INFO] 10.244.0.21:49504 - 10038 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.004336147s
	[INFO] 10.244.0.21:40822 - 54036 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.004565263s
	[INFO] 10.244.0.21:57762 - 51477 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00518219s
	[INFO] 10.244.0.21:45181 - 8432 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005402331s
	[INFO] 10.244.0.21:36366 - 42712 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005137397s
	[INFO] 10.244.0.21:40006 - 38917 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005524318s
	[INFO] 10.244.0.21:41922 - 58704 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000835947s
	[INFO] 10.244.0.21:33973 - 232 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001004999s
	[INFO] 10.244.0.25:48603 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000173336s
	[INFO] 10.244.0.25:39084 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000176845s
	
	
	==> describe nodes <==
	Name:               addons-652362
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-652362
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e91f70b9b442caa4bec80b031add390ac34d321
	                    minikube.k8s.io/name=addons-652362
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_11T02_02_31_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-652362
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 11 Feb 2025 02:02:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-652362
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 11 Feb 2025 02:06:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 11 Feb 2025 02:05:04 +0000   Tue, 11 Feb 2025 02:02:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 11 Feb 2025 02:05:04 +0000   Tue, 11 Feb 2025 02:02:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 11 Feb 2025 02:05:04 +0000   Tue, 11 Feb 2025 02:02:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 11 Feb 2025 02:05:04 +0000   Tue, 11 Feb 2025 02:02:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-652362
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	System Info:
	  Machine ID:                 2d0cd0440a024700bb52b0ad66365cf8
	  System UUID:                58841fb1-fcbe-42a4-9d20-4cdf2bf2bf76
	  Boot ID:                    144975d8-f0ab-4312-b95d-86c41201d6b3
	  Kernel Version:             5.15.0-1075-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  default                     hello-world-app-7d9564db4-t9m6g              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-mmnrg    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m9s
	  kube-system                 coredns-668d6bf9bc-wjjv8                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m15s
	  kube-system                 etcd-addons-652362                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m20s
	  kube-system                 kindnet-g6pgz                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m15s
	  kube-system                 kube-apiserver-addons-652362                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-controller-manager-addons-652362        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-proxy-ltsnp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-scheduler-addons-652362                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m11s  kube-proxy       
	  Normal   Starting                 4m21s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m21s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m20s  kubelet          Node addons-652362 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m20s  kubelet          Node addons-652362 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m20s  kubelet          Node addons-652362 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m16s  node-controller  Node addons-652362 event: Registered Node addons-652362 in Controller
	  Normal   NodeReady                3m56s  kubelet          Node addons-652362 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000737] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000672] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000620] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000671] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000653] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001282] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.635975] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.022896] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.252830] kauditd_printk_skb: 46 callbacks suppressed
	[Feb11 02:04] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[  +1.011720] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[  +2.015838] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[  +4.163567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[  +8.187242] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[Feb11 02:05] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[ +33.280901] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	
	
	==> etcd [5510e4b7c6a29bf889c8d7e8f48722007c0fcb4744d6cb0e79bb3b4ed3e44f67] <==
	{"level":"warn","ts":"2025-02-11T02:02:39.022349Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-11T02:02:38.719273Z","time spent":"303.065967ms","remote":"127.0.0.1:43098","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":1,"response size":375,"request content":"key:\"/registry/namespaces/kube-system\" limit:1 "}
	{"level":"info","ts":"2025-02-11T02:02:39.022632Z","caller":"traceutil/trace.go:171","msg":"trace[1364936449] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"286.238376ms","start":"2025-02-11T02:02:38.736378Z","end":"2025-02-11T02:02:39.022617Z","steps":["trace[1364936449] 'process raft request'  (duration: 193.539319ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-11T02:02:39.022783Z","caller":"traceutil/trace.go:171","msg":"trace[2088569178] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"286.151921ms","start":"2025-02-11T02:02:38.736621Z","end":"2025-02-11T02:02:39.022773Z","steps":["trace[2088569178] 'process raft request'  (duration: 285.365072ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-11T02:02:39.022980Z","caller":"traceutil/trace.go:171","msg":"trace[2027497789] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"286.083779ms","start":"2025-02-11T02:02:38.736887Z","end":"2025-02-11T02:02:39.022970Z","steps":["trace[2027497789] 'process raft request'  (duration: 285.185314ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-11T02:02:39.023184Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.581165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-652362\" limit:1 ","response":"range_response_count:1 size:5655"}
	{"level":"info","ts":"2025-02-11T02:02:39.023212Z","caller":"traceutil/trace.go:171","msg":"trace[1221018515] range","detail":"{range_begin:/registry/minions/addons-652362; range_end:; response_count:1; response_revision:373; }","duration":"103.641222ms","start":"2025-02-11T02:02:38.919562Z","end":"2025-02-11T02:02:39.023204Z","steps":["trace[1221018515] 'agreement among raft nodes before linearized reading'  (duration: 103.537853ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-11T02:02:39.023351Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.534155ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-11T02:02:39.023393Z","caller":"traceutil/trace.go:171","msg":"trace[490144612] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:373; }","duration":"286.577123ms","start":"2025-02-11T02:02:38.736790Z","end":"2025-02-11T02:02:39.023367Z","steps":["trace[490144612] 'agreement among raft nodes before linearized reading'  (duration: 286.535436ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-11T02:02:39.432437Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.592304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/minikube-ingress-dns\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-11T02:02:39.432560Z","caller":"traceutil/trace.go:171","msg":"trace[1323088698] range","detail":"{range_begin:/registry/clusterrolebindings/minikube-ingress-dns; range_end:; response_count:0; response_revision:394; }","duration":"106.760223ms","start":"2025-02-11T02:02:39.325785Z","end":"2025-02-11T02:02:39.432545Z","steps":["trace[1323088698] 'agreement among raft nodes before linearized reading'  (duration: 106.461649ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-11T02:02:39.432824Z","caller":"traceutil/trace.go:171","msg":"trace[1334173204] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"105.057489ms","start":"2025-02-11T02:02:39.327754Z","end":"2025-02-11T02:02:39.432811Z","steps":["trace[1334173204] 'process raft request'  (duration: 104.262262ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-11T02:02:39.433083Z","caller":"traceutil/trace.go:171","msg":"trace[1826041267] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"102.266757ms","start":"2025-02-11T02:02:39.330805Z","end":"2025-02-11T02:02:39.433072Z","steps":["trace[1826041267] 'process raft request'  (duration: 101.294813ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-11T02:02:39.433320Z","caller":"traceutil/trace.go:171","msg":"trace[472700656] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"102.352213ms","start":"2025-02-11T02:02:39.330957Z","end":"2025-02-11T02:02:39.433310Z","steps":["trace[472700656] 'process raft request'  (duration: 101.238476ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-11T02:02:39.632792Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.409875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" limit:1 ","response":"range_response_count:1 size:204"}
	{"level":"info","ts":"2025-02-11T02:02:39.632955Z","caller":"traceutil/trace.go:171","msg":"trace[1324398922] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:401; }","duration":"101.575247ms","start":"2025-02-11T02:02:39.531359Z","end":"2025-02-11T02:02:39.632935Z","steps":["trace[1324398922] 'agreement among raft nodes before linearized reading'  (duration: 101.35446ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-11T02:02:40.029919Z","caller":"traceutil/trace.go:171","msg":"trace[1743697380] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"103.249787ms","start":"2025-02-11T02:02:39.926643Z","end":"2025-02-11T02:02:40.029893Z","steps":["trace[1743697380] 'process raft request'  (duration: 97.944278ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-11T02:02:40.127555Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.232279ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128035197792806597 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/ranges/serviceips\" mod_revision:389 > success:<request_put:<key:\"/registry/ranges/serviceips\" value_size:122513 >> failure:<request_range:<key:\"/registry/ranges/serviceips\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-02-11T02:02:40.127958Z","caller":"traceutil/trace.go:171","msg":"trace[742169239] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"200.782527ms","start":"2025-02-11T02:02:39.927145Z","end":"2025-02-11T02:02:40.127927Z","steps":["trace[742169239] 'process raft request'  (duration: 97.662464ms)","trace[742169239] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/ranges/serviceips; req_size:122546; } (duration: 100.976157ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-11T02:02:40.128166Z","caller":"traceutil/trace.go:171","msg":"trace[554710930] linearizableReadLoop","detail":"{readStateIndex:429; appliedIndex:428; }","duration":"100.496613ms","start":"2025-02-11T02:02:40.027659Z","end":"2025-02-11T02:02:40.128156Z","steps":["trace[554710930] 'read index received'  (duration: 272.689µs)","trace[554710930] 'applied index is now lower than readState.Index'  (duration: 100.223106ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-11T02:02:40.128399Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.730456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" limit:1 ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-02-11T02:02:40.128471Z","caller":"traceutil/trace.go:171","msg":"trace[1708911158] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:420; }","duration":"100.856716ms","start":"2025-02-11T02:02:40.027606Z","end":"2025-02-11T02:02:40.128463Z","steps":["trace[1708911158] 'agreement among raft nodes before linearized reading'  (duration: 100.729567ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-11T02:02:40.130778Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.619081ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-11T02:02:40.131420Z","caller":"traceutil/trace.go:171","msg":"trace[966619503] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:424; }","duration":"103.280825ms","start":"2025-02-11T02:02:40.028124Z","end":"2025-02-11T02:02:40.131404Z","steps":["trace[966619503] 'agreement among raft nodes before linearized reading'  (duration: 102.561293ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-11T02:03:49.828639Z","caller":"traceutil/trace.go:171","msg":"trace[1156450441] transaction","detail":"{read_only:false; response_revision:1165; number_of_response:1; }","duration":"155.899002ms","start":"2025-02-11T02:03:49.672721Z","end":"2025-02-11T02:03:49.828620Z","steps":["trace[1156450441] 'process raft request'  (duration: 155.833244ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-11T02:03:49.828692Z","caller":"traceutil/trace.go:171","msg":"trace[1586045849] transaction","detail":"{read_only:false; response_revision:1164; number_of_response:1; }","duration":"158.95052ms","start":"2025-02-11T02:03:49.669723Z","end":"2025-02-11T02:03:49.828674Z","steps":["trace[1586045849] 'process raft request'  (duration: 85.74998ms)","trace[1586045849] 'compare'  (duration: 72.889435ms)"],"step_count":2}
	
	
	==> kernel <==
	 02:06:51 up 49 min,  0 users,  load average: 0.16, 0.44, 0.24
	Linux addons-652362 5.15.0-1075-gcp #84~20.04.1-Ubuntu SMP Thu Jan 16 20:44:47 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [f79529c20f2070ef507d2bfc9400fdc6c44680d479e3b8bee2df9019b0943b17] <==
	I0211 02:04:44.817203       1 main.go:301] handling current node
	I0211 02:04:54.816890       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:04:54.816944       1 main.go:301] handling current node
	I0211 02:05:04.817531       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:05:04.817567       1 main.go:301] handling current node
	I0211 02:05:14.820227       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:05:14.820267       1 main.go:301] handling current node
	I0211 02:05:24.820186       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:05:24.820238       1 main.go:301] handling current node
	I0211 02:05:34.819963       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:05:34.820002       1 main.go:301] handling current node
	I0211 02:05:44.817224       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:05:44.817528       1 main.go:301] handling current node
	I0211 02:05:54.822601       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:05:54.822652       1 main.go:301] handling current node
	I0211 02:06:04.826689       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:06:04.826721       1 main.go:301] handling current node
	I0211 02:06:14.825812       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:06:14.825855       1 main.go:301] handling current node
	I0211 02:06:24.818195       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:06:24.818238       1 main.go:301] handling current node
	I0211 02:06:34.820172       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:06:34.820210       1 main.go:301] handling current node
	I0211 02:06:44.817127       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:06:44.817168       1 main.go:301] handling current node
	
	
	==> kube-apiserver [100e8a619e2c2b9dd25f429d49204dabca40fbaab83e15bc6d54a41faf539952] <==
	I0211 02:03:21.336561       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0211 02:04:01.291845       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:35592: use of closed network connection
	E0211 02:04:01.452509       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:35624: use of closed network connection
	I0211 02:04:10.419234       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.255.191"}
	I0211 02:04:22.274407       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0211 02:04:28.362625       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0211 02:04:28.532397       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.131.216"}
	I0211 02:04:32.738330       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0211 02:04:33.755988       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0211 02:04:43.192304       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0211 02:05:03.082087       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0211 02:05:07.494856       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0211 02:05:07.494909       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0211 02:05:07.507872       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0211 02:05:07.508023       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0211 02:05:07.518919       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0211 02:05:07.519074       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0211 02:05:07.529405       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0211 02:05:07.529526       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0211 02:05:07.622891       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0211 02:05:07.622923       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0211 02:05:08.530085       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0211 02:05:08.623141       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0211 02:05:08.645513       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0211 02:06:50.454822       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.170.123"}
	
	
	==> kube-controller-manager [ed2b68ebf09608681027744f5605d151b19a33794c5c534288c4af1fcd368d26] <==
	E0211 02:05:48.384286       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0211 02:05:50.351836       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0211 02:05:50.352708       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0211 02:05:50.353626       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0211 02:05:50.353659       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0211 02:05:53.886453       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0211 02:05:53.887338       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0211 02:05:53.888439       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0211 02:05:53.888477       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0211 02:06:18.329386       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0211 02:06:18.330352       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0211 02:06:18.331271       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0211 02:06:18.331310       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0211 02:06:35.988927       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0211 02:06:35.989825       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0211 02:06:35.990718       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0211 02:06:35.990754       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0211 02:06:41.109849       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0211 02:06:41.110673       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0211 02:06:41.111599       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0211 02:06:41.111630       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0211 02:06:50.248907       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="16.829774ms"
	I0211 02:06:50.252821       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="3.872525ms"
	I0211 02:06:50.252896       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="32.278µs"
	I0211 02:06:50.260330       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="105.107µs"
	
	
	==> kube-proxy [9bd98afba233d0838ef12d88779adf1df7a66a6abd2d0573261d5adb749575da] <==
	I0211 02:02:37.622560       1 server_linux.go:66] "Using iptables proxy"
	I0211 02:02:39.231824       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0211 02:02:39.231994       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0211 02:02:40.219465       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0211 02:02:40.219587       1 server_linux.go:170] "Using iptables Proxier"
	I0211 02:02:40.237347       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0211 02:02:40.318023       1 server.go:497] "Version info" version="v1.32.1"
	I0211 02:02:40.318138       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0211 02:02:40.320058       1 config.go:199] "Starting service config controller"
	I0211 02:02:40.320366       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0211 02:02:40.320459       1 config.go:105] "Starting endpoint slice config controller"
	I0211 02:02:40.320489       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0211 02:02:40.321152       1 config.go:329] "Starting node config controller"
	I0211 02:02:40.321282       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0211 02:02:40.421229       1 shared_informer.go:320] Caches are synced for service config
	I0211 02:02:40.421350       1 shared_informer.go:320] Caches are synced for node config
	I0211 02:02:40.421441       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ec8b289006056395189e2bb722d4c0d04bf2ef11971205e26ee5044fbbe0db9d] <==
	E0211 02:02:28.546893       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0211 02:02:28.546907       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0211 02:02:28.546917       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0211 02:02:28.546918       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:28.546791       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0211 02:02:28.546942       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:29.426453       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0211 02:02:29.426498       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:29.445007       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0211 02:02:29.445047       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:29.457391       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0211 02:02:29.457430       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:29.471628       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0211 02:02:29.471667       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:29.513063       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0211 02:02:29.513095       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:29.582447       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0211 02:02:29.582486       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:29.627913       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0211 02:02:29.627953       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0211 02:02:29.664391       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0211 02:02:29.664436       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:29.689657       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0211 02:02:29.689701       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0211 02:02:32.544378       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 11 02:06:31 addons-652362 kubelet[1646]: E0211 02:06:31.045193    1646 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/be0442f286a4f570ba85f81e8d6427240973c07f17eec0949030c4d341f2fa04/diff" to get inode usage: stat /var/lib/containers/storage/overlay/be0442f286a4f570ba85f81e8d6427240973c07f17eec0949030c4d341f2fa04/diff: no such file or directory, extraDiskErr: <nil>
	Feb 11 02:06:31 addons-652362 kubelet[1646]: E0211 02:06:31.050244    1646 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e0367111b18869ac82a835d884d5859bae09c0a9693c3430de11df6084b8bc62/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e0367111b18869ac82a835d884d5859bae09c0a9693c3430de11df6084b8bc62/diff: no such file or directory, extraDiskErr: <nil>
	Feb 11 02:06:31 addons-652362 kubelet[1646]: E0211 02:06:31.050600    1646 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/145e8f627471e81f8a945398a2bc80dcb75d7a67cac75d8699bfdfe46ad55125/diff" to get inode usage: stat /var/lib/containers/storage/overlay/145e8f627471e81f8a945398a2bc80dcb75d7a67cac75d8699bfdfe46ad55125/diff: no such file or directory, extraDiskErr: <nil>
	Feb 11 02:06:31 addons-652362 kubelet[1646]: E0211 02:06:31.057377    1646 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3756861406fc0bcc81bf634989778f74c9ab8a868c981a5c2884495784c349d0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3756861406fc0bcc81bf634989778f74c9ab8a868c981a5c2884495784c349d0/diff: no such file or directory, extraDiskErr: <nil>
	Feb 11 02:06:31 addons-652362 kubelet[1646]: E0211 02:06:31.067144    1646 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239591066925195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617310,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:06:31 addons-652362 kubelet[1646]: E0211 02:06:31.067178    1646 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239591066925195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617310,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:06:41 addons-652362 kubelet[1646]: E0211 02:06:41.069481    1646 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239601069243769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617310,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:06:41 addons-652362 kubelet[1646]: E0211 02:06:41.069514    1646 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239601069243769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617310,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:06:42 addons-652362 kubelet[1646]: I0211 02:06:42.921575    1646 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Feb 11 02:06:50 addons-652362 kubelet[1646]: I0211 02:06:50.249159    1646 memory_manager.go:355] "RemoveStaleState removing state" podUID="b5849d99-a135-4cff-805c-75b535210469" containerName="volume-snapshot-controller"
	Feb 11 02:06:50 addons-652362 kubelet[1646]: I0211 02:06:50.249201    1646 memory_manager.go:355] "RemoveStaleState removing state" podUID="9bf14825-efbd-420e-9ca9-4409aab92d42" containerName="liveness-probe"
	Feb 11 02:06:50 addons-652362 kubelet[1646]: I0211 02:06:50.249209    1646 memory_manager.go:355] "RemoveStaleState removing state" podUID="33f52f8c-ec15-49b7-8567-15ccadbaaba6" containerName="volume-snapshot-controller"
	Feb 11 02:06:50 addons-652362 kubelet[1646]: I0211 02:06:50.249218    1646 memory_manager.go:355] "RemoveStaleState removing state" podUID="4fff01fd-b1a7-409f-a26f-6d274eef5cf4" containerName="csi-resizer"
	Feb 11 02:06:50 addons-652362 kubelet[1646]: I0211 02:06:50.249226    1646 memory_manager.go:355] "RemoveStaleState removing state" podUID="b09bc728-d002-4e4a-8592-add5d40aff10" containerName="local-path-provisioner"
	Feb 11 02:06:50 addons-652362 kubelet[1646]: I0211 02:06:50.249234    1646 memory_manager.go:355] "RemoveStaleState removing state" podUID="9bf14825-efbd-420e-9ca9-4409aab92d42" containerName="csi-provisioner"
	Feb 11 02:06:50 addons-652362 kubelet[1646]: I0211 02:06:50.249242    1646 memory_manager.go:355] "RemoveStaleState removing state" podUID="98bf4ed1-c584-42be-8d0a-f8d3e3e3d6d5" containerName="csi-attacher"
	Feb 11 02:06:50 addons-652362 kubelet[1646]: I0211 02:06:50.249250    1646 memory_manager.go:355] "RemoveStaleState removing state" podUID="9bf14825-efbd-420e-9ca9-4409aab92d42" containerName="csi-external-health-monitor-controller"
	Feb 11 02:06:50 addons-652362 kubelet[1646]: I0211 02:06:50.249260    1646 memory_manager.go:355] "RemoveStaleState removing state" podUID="9bf14825-efbd-420e-9ca9-4409aab92d42" containerName="csi-snapshotter"
	Feb 11 02:06:50 addons-652362 kubelet[1646]: I0211 02:06:50.249267    1646 memory_manager.go:355] "RemoveStaleState removing state" podUID="9bf14825-efbd-420e-9ca9-4409aab92d42" containerName="node-driver-registrar"
	Feb 11 02:06:50 addons-652362 kubelet[1646]: I0211 02:06:50.249276    1646 memory_manager.go:355] "RemoveStaleState removing state" podUID="9bf14825-efbd-420e-9ca9-4409aab92d42" containerName="hostpath"
	Feb 11 02:06:50 addons-652362 kubelet[1646]: I0211 02:06:50.249284    1646 memory_manager.go:355] "RemoveStaleState removing state" podUID="0078b93d-527c-412a-b037-c1e45c00e941" containerName="task-pv-container"
	Feb 11 02:06:50 addons-652362 kubelet[1646]: I0211 02:06:50.275318    1646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tq4k\" (UniqueName: \"kubernetes.io/projected/07ec65ac-1cdc-4046-a73f-8bbf8cb983a9-kube-api-access-5tq4k\") pod \"hello-world-app-7d9564db4-t9m6g\" (UID: \"07ec65ac-1cdc-4046-a73f-8bbf8cb983a9\") " pod="default/hello-world-app-7d9564db4-t9m6g"
	Feb 11 02:06:50 addons-652362 kubelet[1646]: W0211 02:06:50.621531    1646 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9dc8e143cb81ab6db778b64e0489107efadf6e3c219d67a22e1543b28752f000/crio-8d7a43333b56fd732f3b9de5906d1eecf32a8bb98e40bfef67da56a68e4eac9e WatchSource:0}: Error finding container 8d7a43333b56fd732f3b9de5906d1eecf32a8bb98e40bfef67da56a68e4eac9e: Status 404 returned error can't find the container with id 8d7a43333b56fd732f3b9de5906d1eecf32a8bb98e40bfef67da56a68e4eac9e
	Feb 11 02:06:51 addons-652362 kubelet[1646]: E0211 02:06:51.072338    1646 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239611072063414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617310,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:06:51 addons-652362 kubelet[1646]: E0211 02:06:51.072381    1646 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239611072063414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617310,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [779e6ad1acdffd0ff037d0cd7bc93546e74af36164bf49812bda8552d58f8a9d] <==
	I0211 02:02:55.955480       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0211 02:02:56.019672       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0211 02:02:56.019707       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0211 02:02:56.026330       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0211 02:02:56.026392       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f10604da-4fee-428d-8347-eb5c94f4d6e8", APIVersion:"v1", ResourceVersion:"886", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-652362_d8731165-2763-487f-af32-d8b7a21e0c82 became leader
	I0211 02:02:56.026572       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-652362_d8731165-2763-487f-af32-d8b7a21e0c82!
	I0211 02:02:56.127399       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-652362_d8731165-2763-487f-af32-d8b7a21e0c82!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-652362 -n addons-652362
helpers_test.go:261: (dbg) Run:  kubectl --context addons-652362 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-t9m6g ingress-nginx-admission-create-vvqhv ingress-nginx-admission-patch-h66hv
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-652362 describe pod hello-world-app-7d9564db4-t9m6g ingress-nginx-admission-create-vvqhv ingress-nginx-admission-patch-h66hv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-652362 describe pod hello-world-app-7d9564db4-t9m6g ingress-nginx-admission-create-vvqhv ingress-nginx-admission-patch-h66hv: exit status 1 (63.718084ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-t9m6g
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-652362/192.168.49.2
	Start Time:       Tue, 11 Feb 2025 02:06:50 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5tq4k (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5tq4k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-t9m6g to addons-652362
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     0s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.517s (1.517s including waiting). Image size: 4944818 bytes.
	  Normal  Created    0s    kubelet            Created container: hello-world-app
	  Normal  Started    0s    kubelet            Started container hello-world-app

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-vvqhv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-h66hv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-652362 describe pod hello-world-app-7d9564db4-t9m6g ingress-nginx-admission-create-vvqhv ingress-nginx-admission-patch-h66hv: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-652362 addons disable ingress-dns --alsologtostderr -v=1: (1.263042947s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-652362 addons disable ingress --alsologtostderr -v=1: (7.61814231s)
--- FAIL: TestAddons/parallel/Ingress (153.50s)
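
Note on the failure above (editor's addition, not part of the captured log): the step that failed is the curl issued through `out/minikube-linux-amd64 -p addons-652362 ssh`, whose remote process exited with status 28, which matches curl's "operation timed out" error. Below is a minimal standalone Go sketch of how that single check could be reproduced by hand with a bounded retry loop. This is not the actual addons_test.go helper; the profile name, binary path, Host header, and 2-minute deadline are taken or assumed from the log above.

	// repro_ingress_check.go — hypothetical reproduction sketch, not test code
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Profile name and binary path as they appear in the report above.
		const profile = "addons-652362"
		args := []string{
			"-p", profile, "ssh",
			// Same request the test makes: hit the ingress on localhost inside
			// the node with the nginx.example.com Host header.
			"curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1/ -H 'Host: nginx.example.com'",
		}

		// Retry until a deadline instead of failing on the first timeout,
		// since the ingress controller may not be routing yet.
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
			if err == nil {
				fmt.Printf("ingress answered with HTTP %s\n", out)
				return
			}
			// A non-zero exit here (e.g. curl's status 28 surfaced via ssh)
			// is treated as retryable.
			fmt.Printf("curl via minikube ssh failed (%v): %s\n", err, out)
			time.Sleep(5 * time.Second)
		}
		fmt.Println("ingress never became reachable before the deadline")
	}

Polling rather than a one-shot request mirrors the eventual readiness of the ingress controller; in this run even repeated attempts would presumably have kept timing out, pointing at routing to the nginx backend rather than pod readiness.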

                                                
                                    
x
+
TestForceSystemdEnv (23.9s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-794734 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p force-systemd-env-794734 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (21.080370161s)

                                                
                                                
-- stdout --
	* [force-systemd-env-794734] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20400
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-794734" primary control-plane node in "force-systemd-env-794734" cluster
	* Pulling base image v0.0.46 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.24.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0211 02:45:20.409927  242970 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:45:20.410075  242970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:45:20.410086  242970 out.go:358] Setting ErrFile to fd 2...
	I0211 02:45:20.410093  242970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:45:20.410319  242970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
	I0211 02:45:20.410889  242970 out.go:352] Setting JSON to false
	I0211 02:45:20.411939  242970 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5269,"bootTime":1739236651,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 02:45:20.412045  242970 start.go:139] virtualization: kvm guest
	I0211 02:45:20.414759  242970 out.go:177] * [force-systemd-env-794734] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 02:45:20.416310  242970 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 02:45:20.416379  242970 notify.go:220] Checking for updates...
	I0211 02:45:20.419492  242970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 02:45:20.421013  242970 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	I0211 02:45:20.422433  242970 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	I0211 02:45:20.423820  242970 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 02:45:20.425299  242970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0211 02:45:20.427143  242970 config.go:182] Loaded profile config "NoKubernetes-050042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0211 02:45:20.427243  242970 config.go:182] Loaded profile config "kubernetes-upgrade-504968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:45:20.427334  242970 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:45:20.450761  242970 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0211 02:45:20.450879  242970 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:45:20.497648  242970 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:64 SystemTime:2025-02-11 02:45:20.488770868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:45:20.497834  242970 docker.go:318] overlay module found
	I0211 02:45:20.499858  242970 out.go:177] * Using the docker driver based on user configuration
	I0211 02:45:20.501134  242970 start.go:297] selected driver: docker
	I0211 02:45:20.501146  242970 start.go:901] validating driver "docker" against <nil>
	I0211 02:45:20.501157  242970 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 02:45:20.501993  242970 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:45:20.549533  242970 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:64 SystemTime:2025-02-11 02:45:20.539381405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:45:20.549695  242970 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0211 02:45:20.549933  242970 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0211 02:45:20.551872  242970 out.go:177] * Using Docker driver with root privileges
	I0211 02:45:20.553219  242970 cni.go:84] Creating CNI manager for ""
	I0211 02:45:20.553281  242970 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0211 02:45:20.553293  242970 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0211 02:45:20.553384  242970 start.go:340] cluster config:
	{Name:force-systemd-env-794734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:force-systemd-env-794734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:45:20.554784  242970 out.go:177] * Starting "force-systemd-env-794734" primary control-plane node in "force-systemd-env-794734" cluster
	I0211 02:45:20.556199  242970 cache.go:121] Beginning downloading kic base image for docker with crio
	I0211 02:45:20.557555  242970 out.go:177] * Pulling base image v0.0.46 ...
	I0211 02:45:20.558763  242970 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 02:45:20.558801  242970 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0211 02:45:20.558807  242970 cache.go:56] Caching tarball of preloaded images
	I0211 02:45:20.558866  242970 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0211 02:45:20.558894  242970 preload.go:172] Found /home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0211 02:45:20.558905  242970 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0211 02:45:20.559030  242970 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/config.json ...
	I0211 02:45:20.559058  242970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/config.json: {Name:mkbcbeb3687bf72d9779365c504715a727e7911b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:45:20.579036  242970 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0211 02:45:20.579058  242970 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0211 02:45:20.579078  242970 cache.go:230] Successfully downloaded all kic artifacts
	I0211 02:45:20.579117  242970 start.go:360] acquireMachinesLock for force-systemd-env-794734: {Name:mk286e56da4c708404486ea8ab4ad14b7320ddfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 02:45:20.579223  242970 start.go:364] duration metric: took 85.546µs to acquireMachinesLock for "force-systemd-env-794734"
	I0211 02:45:20.579251  242970 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-794734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:force-systemd-env-794734 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 02:45:20.579362  242970 start.go:125] createHost starting for "" (driver="docker")
	I0211 02:45:20.581463  242970 out.go:235] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0211 02:45:20.581687  242970 start.go:159] libmachine.API.Create for "force-systemd-env-794734" (driver="docker")
	I0211 02:45:20.581725  242970 client.go:168] LocalClient.Create starting
	I0211 02:45:20.581788  242970 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem
	I0211 02:45:20.581827  242970 main.go:141] libmachine: Decoding PEM data...
	I0211 02:45:20.581849  242970 main.go:141] libmachine: Parsing certificate...
	I0211 02:45:20.581916  242970 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20400-12240/.minikube/certs/cert.pem
	I0211 02:45:20.581951  242970 main.go:141] libmachine: Decoding PEM data...
	I0211 02:45:20.581989  242970 main.go:141] libmachine: Parsing certificate...
	I0211 02:45:20.582321  242970 cli_runner.go:164] Run: docker network inspect force-systemd-env-794734 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0211 02:45:20.598909  242970 cli_runner.go:211] docker network inspect force-systemd-env-794734 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0211 02:45:20.598984  242970 network_create.go:284] running [docker network inspect force-systemd-env-794734] to gather additional debugging logs...
	I0211 02:45:20.599002  242970 cli_runner.go:164] Run: docker network inspect force-systemd-env-794734
	W0211 02:45:20.615034  242970 cli_runner.go:211] docker network inspect force-systemd-env-794734 returned with exit code 1
	I0211 02:45:20.615060  242970 network_create.go:287] error running [docker network inspect force-systemd-env-794734]: docker network inspect force-systemd-env-794734: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-794734 not found
	I0211 02:45:20.615071  242970 network_create.go:289] output of [docker network inspect force-systemd-env-794734]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-794734 not found
	
	** /stderr **
	I0211 02:45:20.615184  242970 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0211 02:45:20.632548  242970 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-370a375e9ac7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:3f:cb:fd:76} reservation:<nil>}
	I0211 02:45:20.633346  242970 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-031d12f65b31 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:cb:84:c5:a8} reservation:<nil>}
	I0211 02:45:20.634083  242970 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-314624e1e4fb IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d3:cf:64:79} reservation:<nil>}
	I0211 02:45:20.634856  242970 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ed283f220425 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:8f:f4:d8:95} reservation:<nil>}
	I0211 02:45:20.635500  242970 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-fb0b6dad4e6c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:cf:18:98:06} reservation:<nil>}
	I0211 02:45:20.636337  242970 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f17760}
	I0211 02:45:20.636366  242970 network_create.go:124] attempt to create docker network force-systemd-env-794734 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0211 02:45:20.636430  242970 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-794734 force-systemd-env-794734
	I0211 02:45:20.700809  242970 network_create.go:108] docker network force-systemd-env-794734 192.168.94.0/24 created
	I0211 02:45:20.700837  242970 kic.go:121] calculated static IP "192.168.94.2" for the "force-systemd-env-794734" container
	I0211 02:45:20.700887  242970 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0211 02:45:20.718775  242970 cli_runner.go:164] Run: docker volume create force-systemd-env-794734 --label name.minikube.sigs.k8s.io=force-systemd-env-794734 --label created_by.minikube.sigs.k8s.io=true
	I0211 02:45:20.738239  242970 oci.go:103] Successfully created a docker volume force-systemd-env-794734
	I0211 02:45:20.738318  242970 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-794734-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-794734 --entrypoint /usr/bin/test -v force-systemd-env-794734:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0211 02:45:21.279581  242970 oci.go:107] Successfully prepared a docker volume force-systemd-env-794734
	I0211 02:45:21.279638  242970 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 02:45:21.279666  242970 kic.go:194] Starting extracting preloaded images to volume ...
	I0211 02:45:21.279767  242970 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-794734:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0211 02:45:25.882188  242970 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-794734:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.602381397s)
	I0211 02:45:25.882229  242970 kic.go:203] duration metric: took 4.602560353s to extract preloaded images to volume ...
	W0211 02:45:25.882380  242970 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0211 02:45:25.882501  242970 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0211 02:45:25.928597  242970 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-794734 --name force-systemd-env-794734 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-794734 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-794734 --network force-systemd-env-794734 --ip 192.168.94.2 --volume force-systemd-env-794734:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0211 02:45:26.230480  242970 cli_runner.go:164] Run: docker container inspect force-systemd-env-794734 --format={{.State.Running}}
	I0211 02:45:26.249176  242970 cli_runner.go:164] Run: docker container inspect force-systemd-env-794734 --format={{.State.Status}}
	I0211 02:45:26.268164  242970 cli_runner.go:164] Run: docker exec force-systemd-env-794734 stat /var/lib/dpkg/alternatives/iptables
	I0211 02:45:26.307782  242970 oci.go:144] the created container "force-systemd-env-794734" has a running status.
	I0211 02:45:26.307814  242970 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20400-12240/.minikube/machines/force-systemd-env-794734/id_rsa...
	I0211 02:45:26.764172  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/machines/force-systemd-env-794734/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0211 02:45:26.764237  242970 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20400-12240/.minikube/machines/force-systemd-env-794734/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0211 02:45:26.785446  242970 cli_runner.go:164] Run: docker container inspect force-systemd-env-794734 --format={{.State.Status}}
	I0211 02:45:26.802863  242970 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0211 02:45:26.802886  242970 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-794734 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0211 02:45:26.845407  242970 cli_runner.go:164] Run: docker container inspect force-systemd-env-794734 --format={{.State.Status}}
	I0211 02:45:26.865728  242970 machine.go:93] provisionDockerMachine start ...
	I0211 02:45:26.865810  242970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-794734
	I0211 02:45:26.884209  242970 main.go:141] libmachine: Using SSH client type: native
	I0211 02:45:26.884409  242970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 33029 <nil> <nil>}
	I0211 02:45:26.884425  242970 main.go:141] libmachine: About to run SSH command:
	hostname
	I0211 02:45:27.011618  242970 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-794734
	
	I0211 02:45:27.011648  242970 ubuntu.go:169] provisioning hostname "force-systemd-env-794734"
	I0211 02:45:27.011704  242970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-794734
	I0211 02:45:27.028977  242970 main.go:141] libmachine: Using SSH client type: native
	I0211 02:45:27.029159  242970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 33029 <nil> <nil>}
	I0211 02:45:27.029173  242970 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-794734 && echo "force-systemd-env-794734" | sudo tee /etc/hostname
	I0211 02:45:27.166811  242970 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-794734
	
	I0211 02:45:27.166896  242970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-794734
	I0211 02:45:27.184455  242970 main.go:141] libmachine: Using SSH client type: native
	I0211 02:45:27.184653  242970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 33029 <nil> <nil>}
	I0211 02:45:27.184676  242970 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-794734' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-794734/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-794734' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0211 02:45:27.312160  242970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0211 02:45:27.312192  242970 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20400-12240/.minikube CaCertPath:/home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20400-12240/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20400-12240/.minikube}
	I0211 02:45:27.312215  242970 ubuntu.go:177] setting up certificates
	I0211 02:45:27.312228  242970 provision.go:84] configureAuth start
	I0211 02:45:27.312278  242970 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-794734
	I0211 02:45:27.329128  242970 provision.go:143] copyHostCerts
	I0211 02:45:27.329163  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20400-12240/.minikube/ca.pem
	I0211 02:45:27.329193  242970 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12240/.minikube/ca.pem, removing ...
	I0211 02:45:27.329202  242970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12240/.minikube/ca.pem
	I0211 02:45:27.329264  242970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20400-12240/.minikube/ca.pem (1078 bytes)
	I0211 02:45:27.329343  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20400-12240/.minikube/cert.pem
	I0211 02:45:27.329361  242970 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12240/.minikube/cert.pem, removing ...
	I0211 02:45:27.329367  242970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12240/.minikube/cert.pem
	I0211 02:45:27.329389  242970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20400-12240/.minikube/cert.pem (1123 bytes)
	I0211 02:45:27.329446  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20400-12240/.minikube/key.pem
	I0211 02:45:27.329462  242970 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12240/.minikube/key.pem, removing ...
	I0211 02:45:27.329468  242970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12240/.minikube/key.pem
	I0211 02:45:27.329491  242970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20400-12240/.minikube/key.pem (1675 bytes)
	I0211 02:45:27.329549  242970 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20400-12240/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-794734 san=[127.0.0.1 192.168.94.2 force-systemd-env-794734 localhost minikube]
	I0211 02:45:27.439888  242970 provision.go:177] copyRemoteCerts
	I0211 02:45:27.439948  242970 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0211 02:45:27.439983  242970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-794734
	I0211 02:45:27.457699  242970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33029 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/force-systemd-env-794734/id_rsa Username:docker}
	I0211 02:45:27.548861  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0211 02:45:27.548916  242970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0211 02:45:27.571243  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0211 02:45:27.571296  242970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0211 02:45:27.593362  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0211 02:45:27.593433  242970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0211 02:45:27.615072  242970 provision.go:87] duration metric: took 302.829783ms to configureAuth
	I0211 02:45:27.615168  242970 ubuntu.go:193] setting minikube options for container-runtime
	I0211 02:45:27.615381  242970 config.go:182] Loaded profile config "force-systemd-env-794734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:45:27.615483  242970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-794734
	I0211 02:45:27.633686  242970 main.go:141] libmachine: Using SSH client type: native
	I0211 02:45:27.633864  242970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 33029 <nil> <nil>}
	I0211 02:45:27.633883  242970 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0211 02:45:27.846519  242970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0211 02:45:27.846543  242970 machine.go:96] duration metric: took 980.794775ms to provisionDockerMachine
	I0211 02:45:27.846553  242970 client.go:171] duration metric: took 7.264820556s to LocalClient.Create
	I0211 02:45:27.846573  242970 start.go:167] duration metric: took 7.26488722s to libmachine.API.Create "force-systemd-env-794734"
	I0211 02:45:27.846587  242970 start.go:293] postStartSetup for "force-systemd-env-794734" (driver="docker")
	I0211 02:45:27.846598  242970 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0211 02:45:27.846654  242970 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0211 02:45:27.846689  242970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-794734
	I0211 02:45:27.864373  242970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33029 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/force-systemd-env-794734/id_rsa Username:docker}
	I0211 02:45:27.961159  242970 ssh_runner.go:195] Run: cat /etc/os-release
	I0211 02:45:27.964336  242970 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0211 02:45:27.964386  242970 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0211 02:45:27.964397  242970 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0211 02:45:27.964403  242970 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0211 02:45:27.964414  242970 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12240/.minikube/addons for local assets ...
	I0211 02:45:27.964480  242970 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12240/.minikube/files for local assets ...
	I0211 02:45:27.964561  242970 filesync.go:149] local asset: /home/jenkins/minikube-integration/20400-12240/.minikube/files/etc/ssl/certs/190282.pem -> 190282.pem in /etc/ssl/certs
	I0211 02:45:27.964574  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/files/etc/ssl/certs/190282.pem -> /etc/ssl/certs/190282.pem
	I0211 02:45:27.964668  242970 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0211 02:45:27.972712  242970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/files/etc/ssl/certs/190282.pem --> /etc/ssl/certs/190282.pem (1708 bytes)
	I0211 02:45:27.994559  242970 start.go:296] duration metric: took 147.95803ms for postStartSetup
	I0211 02:45:27.994899  242970 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-794734
	I0211 02:45:28.012126  242970 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/config.json ...
	I0211 02:45:28.012384  242970 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0211 02:45:28.012427  242970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-794734
	I0211 02:45:28.030134  242970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33029 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/force-systemd-env-794734/id_rsa Username:docker}
	I0211 02:45:28.116570  242970 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0211 02:45:28.120736  242970 start.go:128] duration metric: took 7.541357786s to createHost
	I0211 02:45:28.120770  242970 start.go:83] releasing machines lock for "force-systemd-env-794734", held for 7.541535248s
	I0211 02:45:28.120839  242970 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-794734
	I0211 02:45:28.140058  242970 ssh_runner.go:195] Run: cat /version.json
	I0211 02:45:28.140100  242970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-794734
	I0211 02:45:28.140154  242970 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0211 02:45:28.140209  242970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-794734
	I0211 02:45:28.159072  242970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33029 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/force-systemd-env-794734/id_rsa Username:docker}
	I0211 02:45:28.161833  242970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33029 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/force-systemd-env-794734/id_rsa Username:docker}
	I0211 02:45:28.251575  242970 ssh_runner.go:195] Run: systemctl --version
	I0211 02:45:28.328495  242970 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0211 02:45:28.469576  242970 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0211 02:45:28.473793  242970 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0211 02:45:28.491461  242970 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0211 02:45:28.491534  242970 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0211 02:45:28.517582  242970 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0211 02:45:28.517602  242970 start.go:495] detecting cgroup driver to use...
	I0211 02:45:28.517617  242970 start.go:499] using "systemd" cgroup driver as enforced via flags
	I0211 02:45:28.517657  242970 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0211 02:45:28.531694  242970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0211 02:45:28.541878  242970 docker.go:217] disabling cri-docker service (if available) ...
	I0211 02:45:28.541932  242970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0211 02:45:28.555400  242970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0211 02:45:28.569200  242970 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0211 02:45:28.639592  242970 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0211 02:45:28.719407  242970 docker.go:233] disabling docker service ...
	I0211 02:45:28.719467  242970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0211 02:45:28.737088  242970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0211 02:45:28.748455  242970 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0211 02:45:28.815508  242970 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0211 02:45:28.888601  242970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0211 02:45:28.898961  242970 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0211 02:45:28.913522  242970 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0211 02:45:28.913592  242970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:45:28.922505  242970 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0211 02:45:28.922554  242970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:45:28.932014  242970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:45:28.941601  242970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:45:28.951182  242970 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0211 02:45:28.959917  242970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:45:28.969193  242970 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:45:28.984295  242970 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
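	(Note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the "systemd" cgroup manager, the "pod" conmon cgroup, the registry.k8s.io/pause:3.10 pause image, and an unprivileged-port sysctl. A minimal sketch of the resulting drop-in, with section placement assumed from CRI-O's standard config layout rather than taken from this run; the actual file on the node may contain additional keys:
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10"
	  [crio.runtime]
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	)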
	I0211 02:45:28.993396  242970 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0211 02:45:29.001230  242970 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0211 02:45:29.008774  242970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 02:45:29.082894  242970 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0211 02:45:29.186290  242970 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0211 02:45:29.186359  242970 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0211 02:45:29.190336  242970 start.go:563] Will wait 60s for crictl version
	I0211 02:45:29.190390  242970 ssh_runner.go:195] Run: which crictl
	I0211 02:45:29.193769  242970 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0211 02:45:29.226323  242970 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0211 02:45:29.226397  242970 ssh_runner.go:195] Run: crio --version
	I0211 02:45:29.259887  242970 ssh_runner.go:195] Run: crio --version
	I0211 02:45:29.295060  242970 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.24.6 ...
	I0211 02:45:29.296404  242970 cli_runner.go:164] Run: docker network inspect force-systemd-env-794734 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0211 02:45:29.312866  242970 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0211 02:45:29.316562  242970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 02:45:29.326854  242970 kubeadm.go:883] updating cluster {Name:force-systemd-env-794734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:force-systemd-env-794734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0211 02:45:29.326964  242970 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 02:45:29.327034  242970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 02:45:29.392453  242970 crio.go:514] all images are preloaded for cri-o runtime.
	I0211 02:45:29.392473  242970 crio.go:433] Images already preloaded, skipping extraction
	I0211 02:45:29.392513  242970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 02:45:29.424612  242970 crio.go:514] all images are preloaded for cri-o runtime.
	I0211 02:45:29.424635  242970 cache_images.go:84] Images are preloaded, skipping loading
	I0211 02:45:29.424642  242970 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.1 crio true true} ...
	I0211 02:45:29.424731  242970 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-794734 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:force-systemd-env-794734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0211 02:45:29.424830  242970 ssh_runner.go:195] Run: crio config
	I0211 02:45:29.466797  242970 cni.go:84] Creating CNI manager for ""
	I0211 02:45:29.466820  242970 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0211 02:45:29.466829  242970 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0211 02:45:29.466849  242970 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-794734 NodeName:force-systemd-env-794734 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0211 02:45:29.466978  242970 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-794734"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0211 02:45:29.467034  242970 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0211 02:45:29.475397  242970 binaries.go:44] Found k8s binaries, skipping transfer
	I0211 02:45:29.475471  242970 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0211 02:45:29.483474  242970 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0211 02:45:29.500141  242970 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0211 02:45:29.516615  242970 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I0211 02:45:29.533005  242970 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0211 02:45:29.536295  242970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 02:45:29.546756  242970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 02:45:29.612452  242970 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 02:45:29.624938  242970 certs.go:68] Setting up /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734 for IP: 192.168.94.2
	I0211 02:45:29.624965  242970 certs.go:194] generating shared ca certs ...
	I0211 02:45:29.624984  242970 certs.go:226] acquiring lock for ca certs: {Name:mk01247a5e2f34c4793d43faa12fab98d68353d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:45:29.625132  242970 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20400-12240/.minikube/ca.key
	I0211 02:45:29.625174  242970 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20400-12240/.minikube/proxy-client-ca.key
	I0211 02:45:29.625184  242970 certs.go:256] generating profile certs ...
	I0211 02:45:29.625235  242970 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/client.key
	I0211 02:45:29.625247  242970 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/client.crt with IP's: []
	I0211 02:45:29.857592  242970 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/client.crt ...
	I0211 02:45:29.857676  242970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/client.crt: {Name:mkcd511a655ab0fb5cd12ac95087ecff746ab808 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:45:29.857889  242970 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/client.key ...
	I0211 02:45:29.857907  242970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/client.key: {Name:mk690f4968e67cfa0bf58d15880b7766482faea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:45:29.858017  242970 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/apiserver.key.023b9b8d
	I0211 02:45:29.858036  242970 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/apiserver.crt.023b9b8d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0211 02:45:30.289482  242970 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/apiserver.crt.023b9b8d ...
	I0211 02:45:30.289524  242970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/apiserver.crt.023b9b8d: {Name:mkdbfeead6b0d8155a9aef65d05e688dbcb73d1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:45:30.289740  242970 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/apiserver.key.023b9b8d ...
	I0211 02:45:30.289762  242970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/apiserver.key.023b9b8d: {Name:mkeb76dbf852dd33cb30323ba7bbe8a8826aa976 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:45:30.289884  242970 certs.go:381] copying /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/apiserver.crt.023b9b8d -> /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/apiserver.crt
	I0211 02:45:30.289985  242970 certs.go:385] copying /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/apiserver.key.023b9b8d -> /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/apiserver.key
	I0211 02:45:30.290072  242970 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/proxy-client.key
	I0211 02:45:30.290096  242970 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/proxy-client.crt with IP's: []
	I0211 02:45:30.510960  242970 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/proxy-client.crt ...
	I0211 02:45:30.510991  242970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/proxy-client.crt: {Name:mkfa933f0b5081e290cf124e6269bbae5ab8ef0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:45:30.511156  242970 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/proxy-client.key ...
	I0211 02:45:30.511169  242970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/proxy-client.key: {Name:mk5a0e2d7f559cccb70bc12a9c2afa3ec4ac8a36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:45:30.511239  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0211 02:45:30.511259  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0211 02:45:30.511269  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0211 02:45:30.511285  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0211 02:45:30.511295  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0211 02:45:30.511304  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0211 02:45:30.511315  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0211 02:45:30.511327  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0211 02:45:30.511383  242970 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/19028.pem (1338 bytes)
	W0211 02:45:30.511416  242970 certs.go:480] ignoring /home/jenkins/minikube-integration/20400-12240/.minikube/certs/19028_empty.pem, impossibly tiny 0 bytes
	I0211 02:45:30.511426  242970 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca-key.pem (1679 bytes)
	I0211 02:45:30.511450  242970 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem (1078 bytes)
	I0211 02:45:30.511471  242970 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/cert.pem (1123 bytes)
	I0211 02:45:30.511495  242970 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/key.pem (1675 bytes)
	I0211 02:45:30.511534  242970 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12240/.minikube/files/etc/ssl/certs/190282.pem (1708 bytes)
	I0211 02:45:30.511559  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/files/etc/ssl/certs/190282.pem -> /usr/share/ca-certificates/190282.pem
	I0211 02:45:30.511574  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0211 02:45:30.511586  242970 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/19028.pem -> /usr/share/ca-certificates/19028.pem
	I0211 02:45:30.512139  242970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0211 02:45:30.534693  242970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0211 02:45:30.556831  242970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0211 02:45:30.579062  242970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0211 02:45:30.600787  242970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0211 02:45:30.621931  242970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0211 02:45:30.643688  242970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0211 02:45:30.665710  242970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/force-systemd-env-794734/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0211 02:45:30.687276  242970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/files/etc/ssl/certs/190282.pem --> /usr/share/ca-certificates/190282.pem (1708 bytes)
	I0211 02:45:30.709007  242970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0211 02:45:30.730893  242970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/certs/19028.pem --> /usr/share/ca-certificates/19028.pem (1338 bytes)
	I0211 02:45:30.753669  242970 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0211 02:45:30.770142  242970 ssh_runner.go:195] Run: openssl version
	I0211 02:45:30.775012  242970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19028.pem && ln -fs /usr/share/ca-certificates/19028.pem /etc/ssl/certs/19028.pem"
	I0211 02:45:30.783363  242970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19028.pem
	I0211 02:45:30.786472  242970 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 11 02:07 /usr/share/ca-certificates/19028.pem
	I0211 02:45:30.786518  242970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19028.pem
	I0211 02:45:30.792806  242970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19028.pem /etc/ssl/certs/51391683.0"
	I0211 02:45:30.801258  242970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/190282.pem && ln -fs /usr/share/ca-certificates/190282.pem /etc/ssl/certs/190282.pem"
	I0211 02:45:30.809882  242970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/190282.pem
	I0211 02:45:30.813038  242970 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 11 02:07 /usr/share/ca-certificates/190282.pem
	I0211 02:45:30.813086  242970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/190282.pem
	I0211 02:45:30.819215  242970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/190282.pem /etc/ssl/certs/3ec20f2e.0"
	I0211 02:45:30.827536  242970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0211 02:45:30.835791  242970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0211 02:45:30.838974  242970 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 11 02:02 /usr/share/ca-certificates/minikubeCA.pem
	I0211 02:45:30.839020  242970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0211 02:45:30.845432  242970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0211 02:45:30.853969  242970 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0211 02:45:30.857325  242970 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0211 02:45:30.857379  242970 kubeadm.go:392] StartCluster: {Name:force-systemd-env-794734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:force-systemd-env-794734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:45:30.857459  242970 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0211 02:45:30.857497  242970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0211 02:45:30.890024  242970 cri.go:89] found id: ""
	I0211 02:45:30.890086  242970 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0211 02:45:30.898209  242970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0211 02:45:30.906434  242970 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0211 02:45:30.906489  242970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0211 02:45:30.914426  242970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0211 02:45:30.914445  242970 kubeadm.go:157] found existing configuration files:
	
	I0211 02:45:30.914486  242970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0211 02:45:30.922809  242970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0211 02:45:30.922863  242970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0211 02:45:30.930588  242970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0211 02:45:30.938386  242970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0211 02:45:30.938449  242970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0211 02:45:30.946159  242970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0211 02:45:30.954260  242970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0211 02:45:30.954322  242970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0211 02:45:30.962373  242970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0211 02:45:30.970366  242970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0211 02:45:30.970412  242970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0211 02:45:30.977888  242970 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0211 02:45:31.015861  242970 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0211 02:45:31.015962  242970 kubeadm.go:310] [preflight] Running pre-flight checks
	I0211 02:45:31.035305  242970 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0211 02:45:31.035465  242970 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-gcp
	I0211 02:45:31.035541  242970 kubeadm.go:310] OS: Linux
	I0211 02:45:31.035625  242970 kubeadm.go:310] CGROUPS_CPU: enabled
	I0211 02:45:31.035707  242970 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0211 02:45:31.035790  242970 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0211 02:45:31.035873  242970 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0211 02:45:31.035959  242970 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0211 02:45:31.036068  242970 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0211 02:45:31.036187  242970 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0211 02:45:31.036287  242970 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0211 02:45:31.036404  242970 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0211 02:45:31.094956  242970 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0211 02:45:31.095121  242970 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0211 02:45:31.095260  242970 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0211 02:45:31.101510  242970 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0211 02:45:31.104248  242970 out.go:235]   - Generating certificates and keys ...
	I0211 02:45:31.104352  242970 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0211 02:45:31.104441  242970 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0211 02:45:31.262673  242970 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0211 02:45:31.545537  242970 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0211 02:45:31.824630  242970 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0211 02:45:31.935901  242970 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0211 02:45:32.049498  242970 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0211 02:45:32.049753  242970 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-794734 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0211 02:45:32.190228  242970 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0211 02:45:32.190444  242970 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-794734 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0211 02:45:32.341243  242970 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0211 02:45:32.468390  242970 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0211 02:45:32.641039  242970 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0211 02:45:32.641247  242970 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0211 02:45:32.778340  242970 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0211 02:45:32.911434  242970 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0211 02:45:32.998361  242970 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0211 02:45:33.179260  242970 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0211 02:45:33.394349  242970 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0211 02:45:33.394918  242970 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0211 02:45:33.397488  242970 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0211 02:45:33.400358  242970 out.go:235]   - Booting up control plane ...
	I0211 02:45:33.400553  242970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0211 02:45:33.400734  242970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0211 02:45:33.400847  242970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0211 02:45:33.410663  242970 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0211 02:45:33.417214  242970 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0211 02:45:33.417299  242970 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0211 02:45:33.513304  242970 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0211 02:45:33.513448  242970 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0211 02:45:34.015075  242970 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.891411ms
	I0211 02:45:34.015219  242970 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0211 02:45:39.016561  242970 kubeadm.go:310] [api-check] The API server is healthy after 5.001293355s
	I0211 02:45:39.030208  242970 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0211 02:45:39.042295  242970 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0211 02:45:39.063453  242970 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0211 02:45:39.063670  242970 kubeadm.go:310] [mark-control-plane] Marking the node force-systemd-env-794734 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0211 02:45:39.073321  242970 kubeadm.go:310] [bootstrap-token] Using token: up5blx.urmdfjj3035lw4sj
	I0211 02:45:39.074995  242970 out.go:235]   - Configuring RBAC rules ...
	I0211 02:45:39.075152  242970 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0211 02:45:39.078655  242970 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0211 02:45:39.084944  242970 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0211 02:45:39.089154  242970 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0211 02:45:39.097044  242970 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0211 02:45:39.100469  242970 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0211 02:45:39.422688  242970 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0211 02:45:39.923126  242970 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0211 02:45:40.423589  242970 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0211 02:45:40.424706  242970 kubeadm.go:310] 
	I0211 02:45:40.424810  242970 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0211 02:45:40.424818  242970 kubeadm.go:310] 
	I0211 02:45:40.424949  242970 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0211 02:45:40.424968  242970 kubeadm.go:310] 
	I0211 02:45:40.425003  242970 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0211 02:45:40.425070  242970 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0211 02:45:40.425154  242970 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0211 02:45:40.425164  242970 kubeadm.go:310] 
	I0211 02:45:40.425236  242970 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0211 02:45:40.425246  242970 kubeadm.go:310] 
	I0211 02:45:40.425315  242970 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0211 02:45:40.425336  242970 kubeadm.go:310] 
	I0211 02:45:40.425430  242970 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0211 02:45:40.425542  242970 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0211 02:45:40.425646  242970 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0211 02:45:40.425655  242970 kubeadm.go:310] 
	I0211 02:45:40.425775  242970 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0211 02:45:40.425886  242970 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0211 02:45:40.425902  242970 kubeadm.go:310] 
	I0211 02:45:40.426013  242970 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token up5blx.urmdfjj3035lw4sj \
	I0211 02:45:40.426150  242970 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2585e5533b2c5436f5c33785db0dba3d71e3104cee8f0548f45ec36ce8746 \
	I0211 02:45:40.426204  242970 kubeadm.go:310] 	--control-plane 
	I0211 02:45:40.426214  242970 kubeadm.go:310] 
	I0211 02:45:40.426300  242970 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0211 02:45:40.426308  242970 kubeadm.go:310] 
	I0211 02:45:40.426376  242970 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token up5blx.urmdfjj3035lw4sj \
	I0211 02:45:40.426466  242970 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2585e5533b2c5436f5c33785db0dba3d71e3104cee8f0548f45ec36ce8746 
	I0211 02:45:40.429407  242970 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0211 02:45:40.429638  242970 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-gcp\n", err: exit status 1
	I0211 02:45:40.429734  242970 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0211 02:45:40.429757  242970 cni.go:84] Creating CNI manager for ""
	I0211 02:45:40.429766  242970 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0211 02:45:40.431410  242970 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0211 02:45:40.433141  242970 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0211 02:45:40.437461  242970 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0211 02:45:40.437482  242970 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0211 02:45:40.455375  242970 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0211 02:45:40.672619  242970 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0211 02:45:40.672678  242970 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:45:40.672752  242970 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes force-systemd-env-794734 minikube.k8s.io/updated_at=2025_02_11T02_45_40_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=8e91f70b9b442caa4bec80b031add390ac34d321 minikube.k8s.io/name=force-systemd-env-794734 minikube.k8s.io/primary=true
	I0211 02:45:40.768838  242970 ops.go:34] apiserver oom_adj: -16
	I0211 02:45:40.768917  242970 kubeadm.go:1113] duration metric: took 96.299612ms to wait for elevateKubeSystemPrivileges
	I0211 02:45:40.768953  242970 kubeadm.go:394] duration metric: took 9.91157936s to StartCluster
	I0211 02:45:40.768986  242970 settings.go:142] acquiring lock: {Name:mkab2b143b733b0f17bed345e030250b8d37f745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:45:40.769062  242970 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20400-12240/kubeconfig
	I0211 02:45:40.770029  242970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/kubeconfig: {Name:mk7d609b79772e5fa84ecd6d15f2188446c79bf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:45:40.770262  242970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0211 02:45:40.770293  242970 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0211 02:45:40.770364  242970 addons.go:69] Setting storage-provisioner=true in profile "force-systemd-env-794734"
	I0211 02:45:40.770384  242970 addons.go:238] Setting addon storage-provisioner=true in "force-systemd-env-794734"
	I0211 02:45:40.770406  242970 host.go:66] Checking if "force-systemd-env-794734" exists ...
	I0211 02:45:40.770272  242970 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 02:45:40.770440  242970 config.go:182] Loaded profile config "force-systemd-env-794734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:45:40.770489  242970 addons.go:69] Setting default-storageclass=true in profile "force-systemd-env-794734"
	I0211 02:45:40.770531  242970 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "force-systemd-env-794734"
	I0211 02:45:40.770879  242970 cli_runner.go:164] Run: docker container inspect force-systemd-env-794734 --format={{.State.Status}}
	I0211 02:45:40.771015  242970 cli_runner.go:164] Run: docker container inspect force-systemd-env-794734 --format={{.State.Status}}
	I0211 02:45:40.773630  242970 out.go:177] * Verifying Kubernetes components...
	I0211 02:45:40.775084  242970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0211 02:45:40.798401  242970 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error getting storagev1 interface client config: context "force-systemd-env-794734" does not exist : client config: context "force-systemd-env-794734" does not exist]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error getting storagev1 interface client config: context "force-systemd-env-794734" does not exist : client config: context "force-systemd-env-794734" does not exist]
	I0211 02:45:40.800087  242970 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 02:45:40.801366  242970 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 02:45:40.801380  242970 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0211 02:45:40.801426  242970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-794734
	I0211 02:45:40.821730  242970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33029 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/force-systemd-env-794734/id_rsa Username:docker}
	I0211 02:45:40.956524  242970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0211 02:45:41.026639  242970 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 02:45:41.137368  242970 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 02:45:41.429702  242970 start.go:971] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	E0211 02:45:41.430092  242970 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: client: client config: context "force-systemd-env-794734" does not exist
	I0211 02:45:41.432135  242970 out.go:201] 
	W0211 02:45:41.434004  242970 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: kubernetes client: client config: client config: context "force-systemd-env-794734" does not exist
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: kubernetes client: client config: client config: context "force-systemd-env-794734" does not exist
	W0211 02:45:41.434025  242970 out.go:270] * 
	* 
	W0211 02:45:41.435090  242970 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0211 02:45:41.437232  242970 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-amd64 start -p force-systemd-env-794734 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-02-11 02:45:41.48072458 +0000 UTC m=+2640.685470744
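Note: the GUEST_START failure above is a kubeconfig/context problem rather than a node-boot problem; kubeadm completed successfully, but the "force-systemd-env-794734" context was never usable from the kubeconfig minikube had just written. A minimal reproduction of the diagnosis (illustrative only, not part of the captured run; assumes the same profile name and the kubeconfig path shown in the logs) would be:

	# contexts kubectl actually knows about; the failing profile should be listed here
	kubectl config get-contexts --kubeconfig=/home/jenkins/minikube-integration/20400-12240/kubeconfig

	# rewrite the profile's context and endpoint, as the status warning later in the post-mortem suggests
	out/minikube-linux-amd64 update-context -p force-systemd-env-794734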
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect force-systemd-env-794734
helpers_test.go:235: (dbg) docker inspect force-systemd-env-794734:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4d56b69224bdfd2115c0f92c0e859e33a17a57008b9b0115f5ff465f031c0542",
	        "Created": "2025-02-11T02:45:25.945183033Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 243684,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-11T02:45:26.060230139Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e72c4cbe9b296d8a58fbcae1a7b969fa1cee662cd7b86f2d4efc5e146519cf0a",
	        "ResolvConfPath": "/var/lib/docker/containers/4d56b69224bdfd2115c0f92c0e859e33a17a57008b9b0115f5ff465f031c0542/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4d56b69224bdfd2115c0f92c0e859e33a17a57008b9b0115f5ff465f031c0542/hostname",
	        "HostsPath": "/var/lib/docker/containers/4d56b69224bdfd2115c0f92c0e859e33a17a57008b9b0115f5ff465f031c0542/hosts",
	        "LogPath": "/var/lib/docker/containers/4d56b69224bdfd2115c0f92c0e859e33a17a57008b9b0115f5ff465f031c0542/4d56b69224bdfd2115c0f92c0e859e33a17a57008b9b0115f5ff465f031c0542-json.log",
	        "Name": "/force-systemd-env-794734",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "force-systemd-env-794734:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "force-systemd-env-794734",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/96787f238a476f01a409f469571fd6eb0e05272821c0423f6f59dd62bf77fe00-init/diff:/var/lib/docker/overlay2/de28131002c1cf3ac1375d9db63a3e00d2a843930d2c723033b62dc11010311c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/96787f238a476f01a409f469571fd6eb0e05272821c0423f6f59dd62bf77fe00/merged",
	                "UpperDir": "/var/lib/docker/overlay2/96787f238a476f01a409f469571fd6eb0e05272821c0423f6f59dd62bf77fe00/diff",
	                "WorkDir": "/var/lib/docker/overlay2/96787f238a476f01a409f469571fd6eb0e05272821c0423f6f59dd62bf77fe00/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-794734",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-794734/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-794734",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-794734",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-794734",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c985af5ac823a749a832e4ed7f73a7661abfbe61bb72c0cb8a94307b8202213c",
	            "SandboxKey": "/var/run/docker/netns/c985af5ac823",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33029"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33030"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33033"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33031"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33032"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-794734": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "8af6cb30723097feb76a0ae6e52bbab7f181b26e909c24ad6b50a0a021e8f3fc",
	                    "EndpointID": "3af355826438e960f1e1825e8c2c57de06f02905a5a3cd9947eeeeffb2bd1842",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-794734",
	                        "4d56b69224bd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p force-systemd-env-794734 -n force-systemd-env-794734
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p force-systemd-env-794734 -n force-systemd-env-794734: exit status 4 (366.676774ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0211 02:45:41.839274  249228 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-794734" does not appear in /home/jenkins/minikube-integration/20400-12240/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "force-systemd-env-794734" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "force-systemd-env-794734" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-794734
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-794734: (2.374719905s)
--- FAIL: TestForceSystemdEnv (23.90s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (189.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d1ef30b1-4788-4d42-8492-c893723a9bf8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00399697s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-149709 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-149709 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-149709 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-149709 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6ac8c511-1482-4312-b43d-188556df06b2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-149709 -n functional-149709
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-02-11 02:13:15.832059148 +0000 UTC m=+695.036805298
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-149709 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-149709 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-149709/192.168.49.2
Start Time:       Tue, 11 Feb 2025 02:10:15 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:  10.244.0.9
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8wfk (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-x8wfk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  3m                default-scheduler  Successfully assigned default/sp-pod to functional-149709
  Warning  Failed     88s               kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     88s               kubelet            Error: ErrImagePull
  Normal   BackOff    87s               kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     87s               kubelet            Error: ImagePullBackOff
  Normal   Pulling    75s (x2 over 3m)  kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-149709 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-149709 logs sp-pod -n default: exit status 1 (67.458851ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-149709 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
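Note: the events above show the actual cause: docker.io rejected the nginx pull with toomanyrequests (Docker Hub rate limiting), so the PVC machinery itself was never exercised. A hedged workaround for reruns (assuming the functional-149709 profile and a pod whose imagePullPolicy allows a locally cached image) is to side-load the image into the node instead of pulling from docker.io:

	# fetch the image on the host; running `docker login` first raises the Docker Hub pull quota
	docker pull docker.io/library/nginx:latest

	# copy the host image into the minikube node so the runtime can use the cached copy
	out/minikube-linux-amd64 -p functional-149709 image load docker.io/library/nginx:latest

	# confirm the image is present inside the node
	out/minikube-linux-amd64 -p functional-149709 image ls | grep nginx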
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-149709
helpers_test.go:235: (dbg) docker inspect functional-149709:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142",
	        "Created": "2025-02-11T02:07:56.49585572Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42905,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-11T02:07:56.603418102Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e72c4cbe9b296d8a58fbcae1a7b969fa1cee662cd7b86f2d4efc5e146519cf0a",
	        "ResolvConfPath": "/var/lib/docker/containers/07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142/hostname",
	        "HostsPath": "/var/lib/docker/containers/07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142/hosts",
	        "LogPath": "/var/lib/docker/containers/07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142/07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142-json.log",
	        "Name": "/functional-149709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-149709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-149709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/774b0ac5a1eed4992955f93033c34e95b369681db7e0c238c1d7dcf09dff4833-init/diff:/var/lib/docker/overlay2/de28131002c1cf3ac1375d9db63a3e00d2a843930d2c723033b62dc11010311c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/774b0ac5a1eed4992955f93033c34e95b369681db7e0c238c1d7dcf09dff4833/merged",
	                "UpperDir": "/var/lib/docker/overlay2/774b0ac5a1eed4992955f93033c34e95b369681db7e0c238c1d7dcf09dff4833/diff",
	                "WorkDir": "/var/lib/docker/overlay2/774b0ac5a1eed4992955f93033c34e95b369681db7e0c238c1d7dcf09dff4833/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-149709",
	                "Source": "/var/lib/docker/volumes/functional-149709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-149709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-149709",
	                "name.minikube.sigs.k8s.io": "functional-149709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "51df9668d661fb8d3aac379d46caad7369396197e86e07065f479acae8fa8c0d",
	            "SandboxKey": "/var/run/docker/netns/51df9668d661",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-149709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2645b0c158dde57d083e408b613f3f93258649b1b014db7cd084427714b28b7b",
	                    "EndpointID": "0750791058d3fe9c90e4e46f52a3a3f02c0e80dc775b76ccc164c34a91af52e7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-149709",
	                        "07de83934a33"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-149709 -n functional-149709
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-149709 logs -n 25: (1.437588261s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|---------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-149709 ssh findmnt         | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | -T /mount3                            |                   |         |         |                     |                     |
	| service        | functional-149709                     | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | service hello-node --url              |                   |         |         |                     |                     |
	|                | --format={{.IP}}                      |                   |         |         |                     |                     |
	| mount          | -p functional-149709                  | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC |                     |
	|                | --kill=true                           |                   |         |         |                     |                     |
	| addons         | functional-149709 addons list         | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	| tunnel         | functional-149709 tunnel              | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC |                     |
	|                | --alsologtostderr                     |                   |         |         |                     |                     |
	| service        | functional-149709 service             | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | hello-node --url                      |                   |         |         |                     |                     |
	| addons         | functional-149709 addons list         | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | -o json                               |                   |         |         |                     |                     |
	| ssh            | functional-149709 ssh sudo cat        | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC |                     |
	|                | /etc/ssl/certs/19028.pem              |                   |         |         |                     |                     |
	| ssh            | functional-149709 ssh sudo cat        | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | /usr/share/ca-certificates/19028.pem  |                   |         |         |                     |                     |
	| ssh            | functional-149709 ssh sudo cat        | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | /etc/ssl/certs/51391683.0             |                   |         |         |                     |                     |
	| ssh            | functional-149709 ssh sudo cat        | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | /etc/ssl/certs/190282.pem             |                   |         |         |                     |                     |
	| ssh            | functional-149709 ssh sudo cat        | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | /usr/share/ca-certificates/190282.pem |                   |         |         |                     |                     |
	| ssh            | functional-149709 ssh sudo cat        | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0             |                   |         |         |                     |                     |
	| ssh            | functional-149709 ssh sudo cat        | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | /etc/test/nested/copy/19028/hosts     |                   |         |         |                     |                     |
	| service        | functional-149709 service             | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | hello-node-connect --url              |                   |         |         |                     |                     |
	| image          | functional-149709                     | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | image ls --format short               |                   |         |         |                     |                     |
	|                | --alsologtostderr                     |                   |         |         |                     |                     |
	| ssh            | functional-149709 ssh pgrep           | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC |                     |
	|                | buildkitd                             |                   |         |         |                     |                     |
	| image          | functional-149709 image build -t      | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | localhost/my-image:functional-149709  |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr      |                   |         |         |                     |                     |
	| image          | functional-149709 image ls            | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	| image          | functional-149709                     | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | image ls --format yaml                |                   |         |         |                     |                     |
	|                | --alsologtostderr                     |                   |         |         |                     |                     |
	| image          | functional-149709                     | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | image ls --format json                |                   |         |         |                     |                     |
	|                | --alsologtostderr                     |                   |         |         |                     |                     |
	| image          | functional-149709                     | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | image ls --format table               |                   |         |         |                     |                     |
	|                | --alsologtostderr                     |                   |         |         |                     |                     |
	| update-context | functional-149709                     | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | update-context                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                |                   |         |         |                     |                     |
	| update-context | functional-149709                     | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | update-context                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                |                   |         |         |                     |                     |
	| update-context | functional-149709                     | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | update-context                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                |                   |         |         |                     |                     |
	|----------------|---------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/11 02:10:05
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0211 02:10:05.320272   54278 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:10:05.320648   54278 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:10:05.320661   54278 out.go:358] Setting ErrFile to fd 2...
	I0211 02:10:05.320669   54278 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:10:05.320988   54278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
	I0211 02:10:05.321699   54278 out.go:352] Setting JSON to false
	I0211 02:10:05.323071   54278 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3154,"bootTime":1739236651,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 02:10:05.323204   54278 start.go:139] virtualization: kvm guest
	I0211 02:10:05.325736   54278 out.go:177] * [functional-149709] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 02:10:05.327477   54278 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 02:10:05.327481   54278 notify.go:220] Checking for updates...
	I0211 02:10:05.329056   54278 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 02:10:05.330527   54278 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	I0211 02:10:05.331831   54278 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	I0211 02:10:05.333144   54278 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 02:10:05.334339   54278 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 02:10:05.307212   54262 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:10:05.307858   54262 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:10:05.349332   54262 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0211 02:10:05.349478   54262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:10:05.413532   54262 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:55 SystemTime:2025-02-11 02:10:05.40286377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:10:05.413681   54262 docker.go:318] overlay module found
	I0211 02:10:05.416347   54262 out.go:177] * Using the docker driver based on existing profile
	I0211 02:10:05.417751   54262 start.go:297] selected driver: docker
	I0211 02:10:05.417781   54262 start.go:901] validating driver "docker" against &{Name:functional-149709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-149709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:10:05.417911   54262 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 02:10:05.420732   54262 out.go:201] 
	W0211 02:10:05.422136   54262 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0211 02:10:05.423576   54262 out.go:201] 
	I0211 02:10:05.335902   54278 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:10:05.336489   54278 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:10:05.374932   54278 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0211 02:10:05.375033   54278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:10:05.450688   54278 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:55 SystemTime:2025-02-11 02:10:05.428519969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:10:05.450832   54278 docker.go:318] overlay module found
	I0211 02:10:05.453001   54278 out.go:177] * Using the docker driver based on existing profile
	I0211 02:10:05.454582   54278 start.go:297] selected driver: docker
	I0211 02:10:05.454596   54278 start.go:901] validating driver "docker" against &{Name:functional-149709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-149709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:10:05.454697   54278 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 02:10:05.454799   54278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:10:05.526387   54278 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:55 SystemTime:2025-02-11 02:10:05.512545396 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:10:05.527258   54278 cni.go:84] Creating CNI manager for ""
	I0211 02:10:05.527339   54278 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0211 02:10:05.527405   54278 start.go:340] cluster config:
	{Name:functional-149709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-149709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:10:05.529484   54278 out.go:177] * dry-run validation complete!
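The second start above (PID 54278) finishes its dry-run validation, while the first (PID 54262) aborts with RSRC_INSUFFICIENT_REQ_MEMORY because 250MiB is below minikube's 1800MB usable minimum. A minimal sketch of the two outcomes against the existing profile; the exact flag set used by the test harness is an assumption here:

  # Below minikube's 1800MB usable minimum: start aborts with RSRC_INSUFFICIENT_REQ_MEMORY.
  out/minikube-linux-amd64 start -p functional-149709 --dry-run --memory=250mb --driver=docker --container-runtime=crio

  # Without the undersized memory request, the same dry run validates against the existing profile.
  out/minikube-linux-amd64 start -p functional-149709 --dry-run --driver=docker --container-runtime=crio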
	
	
	==> CRI-O <==
	Feb 11 02:10:35 functional-149709 crio[5428]: time="2025-02-11 02:10:35.100489269Z" level=info msg="Removed pod sandbox: 83ccf40d323ca79d2ced724ff269dcb8fbb90ce5a972ab7f705129b6c51240b0" id=b3fc1916-a632-494c-839c-cb9d3dce1080 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 11 02:10:35 functional-149709 crio[5428]: time="2025-02-11 02:10:35.100963209Z" level=info msg="Stopping pod sandbox: 609c74fa27b7243626856d375313745804960eb55cafa1d10f5309e373ee5254" id=8a3e5c8e-2964-40c5-b5d0-997b52602517 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 11 02:10:35 functional-149709 crio[5428]: time="2025-02-11 02:10:35.100998117Z" level=info msg="Stopped pod sandbox (already stopped): 609c74fa27b7243626856d375313745804960eb55cafa1d10f5309e373ee5254" id=8a3e5c8e-2964-40c5-b5d0-997b52602517 name=/runtime.v1.RuntimeService/StopPodSandbox
	Feb 11 02:10:35 functional-149709 crio[5428]: time="2025-02-11 02:10:35.101308333Z" level=info msg="Removing pod sandbox: 609c74fa27b7243626856d375313745804960eb55cafa1d10f5309e373ee5254" id=75a63f7d-7025-4896-b9d0-ae93d8ea1ce3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 11 02:10:35 functional-149709 crio[5428]: time="2025-02-11 02:10:35.106854225Z" level=info msg="Removed pod sandbox: 609c74fa27b7243626856d375313745804960eb55cafa1d10f5309e373ee5254" id=75a63f7d-7025-4896-b9d0-ae93d8ea1ce3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Feb 11 02:11:16 functional-149709 crio[5428]: time="2025-02-11 02:11:16.856794977Z" level=info msg="Pulling image: docker.io/nginx:latest" id=a7524e68-24d8-4445-a458-5ae9f48b492c name=/runtime.v1.ImageService/PullImage
	Feb 11 02:11:16 functional-149709 crio[5428]: time="2025-02-11 02:11:16.861064767Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Feb 11 02:11:17 functional-149709 crio[5428]: time="2025-02-11 02:11:17.283109627Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=ce330ff6-6432-4f38-be83-e67a9b26034e name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:11:17 functional-149709 crio[5428]: time="2025-02-11 02:11:17.283321184Z" level=info msg="Image docker.io/nginx:alpine not found" id=ce330ff6-6432-4f38-be83-e67a9b26034e name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:11:27 functional-149709 crio[5428]: time="2025-02-11 02:11:27.849210174Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=0d8c357d-f0f1-4d86-b930-d2d1e990c9ba name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:11:27 functional-149709 crio[5428]: time="2025-02-11 02:11:27.849507373Z" level=info msg="Image docker.io/nginx:alpine not found" id=0d8c357d-f0f1-4d86-b930-d2d1e990c9ba name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:11:47 functional-149709 crio[5428]: time="2025-02-11 02:11:47.481619985Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=9e602033-d372-4c7c-8fe3-1067a1b43122 name=/runtime.v1.ImageService/PullImage
	Feb 11 02:11:47 functional-149709 crio[5428]: time="2025-02-11 02:11:47.511750552Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Feb 11 02:12:18 functional-149709 crio[5428]: time="2025-02-11 02:12:18.146793384Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=80bda07f-1f0d-4a50-825f-9bae24ba3639 name=/runtime.v1.ImageService/PullImage
	Feb 11 02:12:18 functional-149709 crio[5428]: time="2025-02-11 02:12:18.150304891Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Feb 11 02:12:18 functional-149709 crio[5428]: time="2025-02-11 02:12:18.410186638Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=be468915-7ac4-4979-9a4f-896a12f7603b name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:12:18 functional-149709 crio[5428]: time="2025-02-11 02:12:18.410407385Z" level=info msg="Image docker.io/mysql:5.7 not found" id=be468915-7ac4-4979-9a4f-896a12f7603b name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:12:30 functional-149709 crio[5428]: time="2025-02-11 02:12:30.848934580Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=b13a1fa0-cfcc-4694-85cb-830a2baa259d name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:12:30 functional-149709 crio[5428]: time="2025-02-11 02:12:30.849287244Z" level=info msg="Image docker.io/mysql:5.7 not found" id=b13a1fa0-cfcc-4694-85cb-830a2baa259d name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:12:48 functional-149709 crio[5428]: time="2025-02-11 02:12:48.771419021Z" level=info msg="Pulling image: docker.io/nginx:latest" id=504219f4-c5ec-491a-9750-93e152d478eb name=/runtime.v1.ImageService/PullImage
	Feb 11 02:12:48 functional-149709 crio[5428]: time="2025-02-11 02:12:48.775714318Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Feb 11 02:13:03 functional-149709 crio[5428]: time="2025-02-11 02:13:03.849632086Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=a8d21c41-4ad9-4079-9bf1-45df1b023baa name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:13:03 functional-149709 crio[5428]: time="2025-02-11 02:13:03.849889846Z" level=info msg="Image docker.io/nginx:alpine not found" id=a8d21c41-4ad9-4079-9bf1-45df1b023baa name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:13:15 functional-149709 crio[5428]: time="2025-02-11 02:13:15.849933433Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=df513a96-21af-4ede-a389-17416b867b45 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:13:15 functional-149709 crio[5428]: time="2025-02-11 02:13:15.850176267Z" level=info msg="Image docker.io/nginx:alpine not found" id=df513a96-21af-4ede-a389-17416b867b45 name=/runtime.v1.ImageService/ImageStatus
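The CRI-O log above shows repeated "Pulling image" / "not found" cycles for docker.io/nginx:alpine, docker.io/nginx:latest and docker.io/mysql:5.7. A hedged sketch of inspecting the pull state from the node (assumes SSH access to the functional-149709 profile and that crictl is on the node's PATH, as it normally is on kicbase images):

  # Check whether the images CRI-O keeps retrying are present on the node yet.
  out/minikube-linux-amd64 -p functional-149709 ssh -- sudo crictl images | grep -E 'nginx|mysql'

  # Trigger a pull by hand to surface the underlying registry error (rate limit, DNS, etc.).
  out/minikube-linux-amd64 -p functional-149709 ssh -- sudo crictl pull docker.io/library/nginx:alpine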
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	7e7f8ba305b43       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 3 minutes ago       Running             echoserver                  0                   fffa070ac638f       hello-node-connect-58f9cf68d8-bvxj9
	67f64cc560215       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   3 minutes ago       Running             dashboard-metrics-scraper   0                   95bec25c3639a       dashboard-metrics-scraper-5d59dccf9b-r64gk
	443f917895108       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         3 minutes ago       Running             kubernetes-dashboard        0                   60294ac515e46       kubernetes-dashboard-7779f9b69b-hhfxn
	f8f2ca9ddefae       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              3 minutes ago       Exited              mount-munger                0                   51933d22a29f8       busybox-mount
	347f3acd039f3       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago       Running             echoserver                  0                   46b85f3d5b73f       hello-node-fcfd88b6f-fn66v
	ee4aca1380c7a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago       Running             coredns                     3                   3332ca8daec6e       coredns-668d6bf9bc-dhfq8
	927003a362441       d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56                                                 3 minutes ago       Running             kindnet-cni                 3                   58081c3fc053a       kindnet-s67cs
	af88c17c1d865       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago       Running             storage-provisioner         3                   a89530c237133       storage-provisioner
	b1489888dc245       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                 3 minutes ago       Running             kube-proxy                  3                   93ca34f2cbc3b       kube-proxy-z65nc
	724750e44ce8c       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                 3 minutes ago       Running             kube-apiserver              0                   91cb17a4d5f3e       kube-apiserver-functional-149709
	b2919d4a128dd       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                 3 minutes ago       Running             kube-scheduler              3                   48c711a00a3da       kube-scheduler-functional-149709
	a81f56da2835a       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                 3 minutes ago       Running             kube-controller-manager     3                   9a2d28d996f60       kube-controller-manager-functional-149709
	13f6f2d25afc4       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 3 minutes ago       Running             etcd                        3                   e5bac45762607       etcd-functional-149709
	84de15df033ca       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 4 minutes ago       Exited              coredns                     2                   3332ca8daec6e       coredns-668d6bf9bc-dhfq8
	ea9cf7d369c96       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago       Exited              storage-provisioner         2                   a89530c237133       storage-provisioner
	ae4d3eef6bbe1       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 4 minutes ago       Exited              etcd                        2                   e5bac45762607       etcd-functional-149709
	c821bc7e1ee3c       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                 4 minutes ago       Exited              kube-scheduler              2                   48c711a00a3da       kube-scheduler-functional-149709
	db71bc29b9056       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                 4 minutes ago       Exited              kube-proxy                  2                   93ca34f2cbc3b       kube-proxy-z65nc
	55bc471731df9       d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56                                                 4 minutes ago       Exited              kindnet-cni                 2                   58081c3fc053a       kindnet-s67cs
	c93bcaac3d750       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                 4 minutes ago       Exited              kube-controller-manager     2                   9a2d28d996f60       kube-controller-manager-functional-149709
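The container status table reflects CRI-O's view of all containers, including the exited earlier attempts. A sketch of commands that produce this kind of listing when run on the node (standard crictl flags assumed):

  # List every container known to CRI-O, including exited ones (the table above).
  sudo crictl ps -a

  # Narrow to one pod's containers by name, e.g. the coredns pod.
  sudo crictl ps -a --name coredns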
	
	
	==> coredns [84de15df033cad9be45d1fb4f1bb8202419d7b27eebbda814ad9a51770b7d1ad] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40915 - 41543 "HINFO IN 7172967796518925767.611749124021480384. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.030138844s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ee4aca1380c7a15be8adde30e48d3fa13cbce13d70087f4b8018e38ec6f7d59d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51854 - 32205 "HINFO IN 5883833411052480777.7266958046914092204. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027584315s
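Both coredns instances start cleanly; the first was terminated by SIGTERM during the restart, the second is the currently running container. A minimal way to pull these logs through the API server instead of the node (pod name taken from the tables above):

  # Logs of the currently running coredns container.
  kubectl --context functional-149709 -n kube-system logs coredns-668d6bf9bc-dhfq8

  # Include the previous (SIGTERM'd) instance of the same container.
  kubectl --context functional-149709 -n kube-system logs coredns-668d6bf9bc-dhfq8 --previous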
	
	
	==> describe nodes <==
	Name:               functional-149709
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-149709
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e91f70b9b442caa4bec80b031add390ac34d321
	                    minikube.k8s.io/name=functional-149709
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_11T02_08_13_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 11 Feb 2025 02:08:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-149709
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 11 Feb 2025 02:13:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 11 Feb 2025 02:10:39 +0000   Tue, 11 Feb 2025 02:08:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 11 Feb 2025 02:10:39 +0000   Tue, 11 Feb 2025 02:08:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 11 Feb 2025 02:10:39 +0000   Tue, 11 Feb 2025 02:08:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 11 Feb 2025 02:10:39 +0000   Tue, 11 Feb 2025 02:08:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-149709
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	System Info:
	  Machine ID:                 24da52d556764f96a708105f328eb55b
	  System UUID:                92230bdf-e911-4103-b800-1f0e670ce984
	  Boot ID:                    144975d8-f0ab-4312-b95d-86c41201d6b3
	  Kernel Version:             5.15.0-1075-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-bvxj9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  default                     hello-node-fcfd88b6f-fn66v                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  default                     mysql-58ccfd96bb-7rqk9                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     3m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-668d6bf9bc-dhfq8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m
	  kube-system                 etcd-functional-149709                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m5s
	  kube-system                 kindnet-s67cs                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m1s
	  kube-system                 kube-apiserver-functional-149709              250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-controller-manager-functional-149709     200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-proxy-z65nc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-functional-149709              100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-r64gk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-hhfxn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m58s                  kube-proxy       
	  Normal   Starting                 3m37s                  kube-proxy       
	  Normal   Starting                 4m18s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  5m10s (x8 over 5m10s)  kubelet          Node functional-149709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m10s (x8 over 5m10s)  kubelet          Node functional-149709 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m10s (x8 over 5m10s)  kubelet          Node functional-149709 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     5m5s                   kubelet          Node functional-149709 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 5m5s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m5s                   kubelet          Node functional-149709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m5s                   kubelet          Node functional-149709 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5m5s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m1s                   node-controller  Node functional-149709 event: Registered Node functional-149709 in Controller
	  Normal   NodeReady                4m46s                  kubelet          Node functional-149709 status is now: NodeReady
	  Normal   RegisteredNode           4m17s                  node-controller  Node functional-149709 event: Registered Node functional-149709 in Controller
	  Normal   Starting                 3m43s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m43s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m42s (x8 over 3m42s)  kubelet          Node functional-149709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m42s (x8 over 3m42s)  kubelet          Node functional-149709 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m42s (x8 over 3m42s)  kubelet          Node functional-149709 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m36s                  node-controller  Node functional-149709 event: Registered Node functional-149709 in Controller
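The node description above comes from describing the single control-plane node. For reference, a sketch of re-querying the same information, plus just the condition summary:

  # Full node description, as captured above.
  kubectl --context functional-149709 describe node functional-149709

  # Only the condition summary, useful when checking Ready/Pressure states in scripts.
  kubectl --context functional-149709 get node functional-149709 \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'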
	
	
	==> dmesg <==
	[  +0.635975] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.022896] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.252830] kauditd_printk_skb: 46 callbacks suppressed
	[Feb11 02:04] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[  +1.011720] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[  +2.015838] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[  +4.163567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[  +8.187242] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[Feb11 02:05] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[ +33.280901] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[Feb11 02:10] FS-Cache: Duplicate cookie detected
	[  +0.004731] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006748] FS-Cache: O-cookie d=0000000072d28f37{9P.session} n=000000007e783556
	[  +0.007525] FS-Cache: O-key=[10] '34323935363832373232'
	[  +0.005372] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006609] FS-Cache: N-cookie d=0000000072d28f37{9P.session} n=00000000b78876c7
	[  +0.007524] FS-Cache: N-key=[10] '34323935363832373232'
	[ +11.887533] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
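The dmesg excerpt is dominated by "martian source" warnings for 10.244.0.22 (pod-CIDR traffic arriving on eth0 with a loopback source address), plus an FS-Cache duplicate-cookie note. A small sketch for isolating those entries, assuming util-linux dmesg on the CI host or via minikube ssh:

  # Human-readable timestamps, filtered to the martian-source warnings seen above.
  sudo dmesg --ctime | grep -i 'martian source'

  # Follow new kernel messages live while reproducing the test.
  sudo dmesg --follow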
	
	
	==> etcd [13f6f2d25afc48314e920461dcf24b65df96a31eb763d5769ab5d42a96445da9] <==
	{"level":"info","ts":"2025-02-11T02:09:35.743070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2025-02-11T02:09:35.743140Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2025-02-11T02:09:35.743245Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-11T02:09:35.743285Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-11T02:09:35.744776Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-11T02:09:35.745052Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-11T02:09:35.745094Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-11T02:09:35.745213Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-11T02:09:35.745231Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-11T02:09:36.734523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-11T02:09:36.734627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-11T02:09:36.734662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-11T02:09:36.734685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2025-02-11T02:09:36.734698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-02-11T02:09:36.734709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2025-02-11T02:09:36.734726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-02-11T02:09:36.737356Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-149709 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-11T02:09:36.737370Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-11T02:09:36.737376Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-11T02:09:36.737580Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-11T02:09:36.737603Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-11T02:09:36.738207Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-11T02:09:36.738396Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-11T02:09:36.739045Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-11T02:09:36.739211Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> etcd [ae4d3eef6bbe171f428c3c666d75dc4dcaf7da5231e983d379d87ed49a826c80] <==
	{"level":"info","ts":"2025-02-11T02:08:56.231198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-11T02:08:56.231222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-02-11T02:08:56.231237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2025-02-11T02:08:56.231244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-11T02:08:56.231277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2025-02-11T02:08:56.231286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-11T02:08:56.232508Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-149709 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-11T02:08:56.232549Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-11T02:08:56.232584Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-11T02:08:56.232810Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-11T02:08:56.232842Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-11T02:08:56.233302Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-11T02:08:56.233534Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-11T02:08:56.233868Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-11T02:08:56.234599Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-02-11T02:09:18.169491Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-02-11T02:09:18.169573Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-149709","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2025-02-11T02:09:18.169682Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-11T02:09:18.169783Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-11T02:09:18.179760Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-11T02:09:18.179872Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-11T02:09:18.179964Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-02-11T02:09:18.182611Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-11T02:09:18.182700Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-11T02:09:18.182740Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-149709","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 02:13:17 up 55 min,  0 users,  load average: 0.44, 0.61, 0.40
	Linux functional-149709 5.15.0-1075-gcp #84~20.04.1-Ubuntu SMP Thu Jan 16 20:44:47 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [55bc471731df9c7b9890eb0f905cd6dc2cbd18872df14b110fd858aee525f6e7] <==
	I0211 02:08:54.225156       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0211 02:08:54.316602       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0211 02:08:54.316896       1 main.go:148] setting mtu 1500 for CNI 
	I0211 02:08:54.316964       1 main.go:178] kindnetd IP family: "ipv4"
	I0211 02:08:54.317002       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0211 02:08:54.716770       1 controller.go:361] Starting controller kube-network-policies
	I0211 02:08:54.718251       1 controller.go:365] Waiting for informer caches to sync
	I0211 02:08:54.718269       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0211 02:08:57.419191       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0211 02:08:57.419279       1 metrics.go:61] Registering metrics
	I0211 02:08:57.424776       1 controller.go:401] Syncing nftables rules
	I0211 02:09:04.717279       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:09:04.717365       1 main.go:301] handling current node
	I0211 02:09:14.717171       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:09:14.717243       1 main.go:301] handling current node
	
	
	==> kindnet [927003a36244161fac9082127320d90644d45a793d96e9b958594b349f4b8be6] <==
	I0211 02:11:09.818717       1 main.go:301] handling current node
	I0211 02:11:19.816489       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:11:19.816530       1 main.go:301] handling current node
	I0211 02:11:29.818187       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:11:29.818218       1 main.go:301] handling current node
	I0211 02:11:39.816473       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:11:39.816503       1 main.go:301] handling current node
	I0211 02:11:49.817281       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:11:49.817329       1 main.go:301] handling current node
	I0211 02:11:59.819213       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:11:59.819253       1 main.go:301] handling current node
	I0211 02:12:09.816851       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:12:09.816966       1 main.go:301] handling current node
	I0211 02:12:19.817752       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:12:19.817806       1 main.go:301] handling current node
	I0211 02:12:29.817321       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:12:29.817368       1 main.go:301] handling current node
	I0211 02:12:39.816743       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:12:39.816780       1 main.go:301] handling current node
	I0211 02:12:49.816508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:12:49.816543       1 main.go:301] handling current node
	I0211 02:12:59.825611       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:12:59.825643       1 main.go:301] handling current node
	I0211 02:13:09.820193       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:13:09.820226       1 main.go:301] handling current node
	
	
	==> kube-apiserver [724750e44ce8cae9513bd9c08e17de0e9d59da3cef9127a5d037e85aad308969] <==
	I0211 02:09:37.918436       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0211 02:09:37.918454       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0211 02:09:37.918772       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0211 02:09:37.918379       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0211 02:09:37.919285       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0211 02:09:37.921725       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0211 02:09:37.926318       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0211 02:09:37.926734       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0211 02:09:38.044957       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0211 02:09:38.792942       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0211 02:09:39.797125       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0211 02:09:39.934680       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0211 02:09:39.984590       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0211 02:09:39.989886       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0211 02:09:41.142559       1 controller.go:615] quota admission added evaluator for: endpoints
	I0211 02:09:41.243743       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0211 02:09:41.441990       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0211 02:09:59.195401       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.164.177"}
	I0211 02:10:03.301801       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.74.46"}
	I0211 02:10:07.567102       1 controller.go:615] quota admission added evaluator for: namespaces
	I0211 02:10:07.755621       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.48.94"}
	I0211 02:10:07.818475       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.109.79"}
	I0211 02:10:15.441505       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.231.165"}
	I0211 02:10:15.817604       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.252.244"}
	I0211 02:10:17.306946       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.248.57"}
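The API server log records ClusterIP allocations for the services the functional tests create (hello-node, hello-node-connect, nginx-svc, mysql, and the dashboard services). A quick way to cross-check those allocations against the live cluster:

  # List all services and confirm the ClusterIPs match the alloc.go entries above.
  kubectl --context functional-149709 get svc -A -o wide

  # Or just the default-namespace services created by the tests.
  kubectl --context functional-149709 get svc -n default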
	
	
	==> kube-controller-manager [a81f56da2835a15db3814607908ab32c9b08c5346dc59c0201aa8a547354616e] <==
	E0211 02:10:07.655426       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b\" failed with pods \"dashboard-metrics-scraper-5d59dccf9b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0211 02:10:07.724174       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="61.893374ms"
	I0211 02:10:07.729652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="32.760726ms"
	I0211 02:10:07.733085       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="8.862906ms"
	I0211 02:10:07.733176       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="45.539µs"
	I0211 02:10:07.743833       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="82.739µs"
	I0211 02:10:07.744414       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="14.712165ms"
	I0211 02:10:07.756827       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="12.36879ms"
	I0211 02:10:07.756934       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="64.52µs"
	I0211 02:10:13.169152       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="9.517141ms"
	I0211 02:10:13.169822       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="98.053µs"
	I0211 02:10:15.173254       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="7.528054ms"
	I0211 02:10:15.173353       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="55.829µs"
	I0211 02:10:15.710033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-58f9cf68d8" duration="11.649649ms"
	I0211 02:10:15.717905       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-58f9cf68d8" duration="7.820961ms"
	I0211 02:10:15.718093       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-58f9cf68d8" duration="57.633µs"
	I0211 02:10:16.173661       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-58f9cf68d8" duration="5.290043ms"
	I0211 02:10:16.173763       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-58f9cf68d8" duration="53.809µs"
	I0211 02:10:17.357185       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="13.574549ms"
	I0211 02:10:17.362177       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="4.946891ms"
	I0211 02:10:17.362284       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="64.67µs"
	I0211 02:10:17.363702       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="77.12µs"
	I0211 02:10:39.186393       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-149709"
	I0211 02:12:18.419545       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="75.056µs"
	I0211 02:12:30.858473       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="60.058µs"
	
	
	==> kube-controller-manager [c93bcaac3d7500b104deb63dc6718790d663f7a286c9babe2ba24074581f9b35] <==
	I0211 02:09:00.473768       1 shared_informer.go:320] Caches are synced for daemon sets
	I0211 02:09:00.473818       1 shared_informer.go:320] Caches are synced for namespace
	I0211 02:09:00.473849       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0211 02:09:00.473916       1 shared_informer.go:320] Caches are synced for disruption
	I0211 02:09:00.475090       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0211 02:09:00.475131       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0211 02:09:00.475156       1 shared_informer.go:320] Caches are synced for job
	I0211 02:09:00.475189       1 shared_informer.go:320] Caches are synced for ephemeral
	I0211 02:09:00.478956       1 shared_informer.go:320] Caches are synced for resource quota
	I0211 02:09:00.480070       1 shared_informer.go:320] Caches are synced for node
	I0211 02:09:00.480142       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0211 02:09:00.480157       1 shared_informer.go:320] Caches are synced for HPA
	I0211 02:09:00.480190       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0211 02:09:00.480201       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0211 02:09:00.480208       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0211 02:09:00.480260       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-149709"
	I0211 02:09:00.481032       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0211 02:09:00.493301       1 shared_informer.go:320] Caches are synced for garbage collector
	I0211 02:09:00.782830       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="307.64995ms"
	I0211 02:09:00.782956       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="80.187µs"
	I0211 02:09:04.362924       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-149709"
	I0211 02:09:14.546198       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-149709"
	I0211 02:09:14.877972       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="120.207µs"
	I0211 02:09:14.897982       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="7.680941ms"
	I0211 02:09:14.898075       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="57.865µs"
	
	
	==> kube-proxy [b1489888dc24549498bf1cc78fb07dc7f06dc42d60e2b39f651c2a9142215b93] <==
	I0211 02:09:39.257115       1 server_linux.go:66] "Using iptables proxy"
	I0211 02:09:39.399771       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0211 02:09:39.399833       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0211 02:09:39.419929       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0211 02:09:39.419990       1 server_linux.go:170] "Using iptables Proxier"
	I0211 02:09:39.421894       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0211 02:09:39.422247       1 server.go:497] "Version info" version="v1.32.1"
	I0211 02:09:39.422286       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0211 02:09:39.423640       1 config.go:199] "Starting service config controller"
	I0211 02:09:39.423682       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0211 02:09:39.423693       1 config.go:105] "Starting endpoint slice config controller"
	I0211 02:09:39.423713       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0211 02:09:39.423771       1 config.go:329] "Starting node config controller"
	I0211 02:09:39.423782       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0211 02:09:39.523916       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0211 02:09:39.523950       1 shared_informer.go:320] Caches are synced for service config
	I0211 02:09:39.523939       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [db71bc29b9056266eecb06c5d46a675313e7d2ee4ce05efabfd92321a335ad7a] <==
	I0211 02:08:54.246925       1 server_linux.go:66] "Using iptables proxy"
	E0211 02:08:57.227933       1 server.go:687] "Failed to retrieve node info" err="nodes \"functional-149709\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope"
	I0211 02:08:58.286204       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0211 02:08:58.286284       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0211 02:08:58.309313       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0211 02:08:58.309374       1 server_linux.go:170] "Using iptables Proxier"
	I0211 02:08:58.311629       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0211 02:08:58.312077       1 server.go:497] "Version info" version="v1.32.1"
	I0211 02:08:58.312097       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0211 02:08:58.314912       1 config.go:199] "Starting service config controller"
	I0211 02:08:58.314935       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0211 02:08:58.314964       1 config.go:105] "Starting endpoint slice config controller"
	I0211 02:08:58.314968       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0211 02:08:58.314986       1 config.go:329] "Starting node config controller"
	I0211 02:08:58.314989       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0211 02:08:58.415379       1 shared_informer.go:320] Caches are synced for node config
	I0211 02:08:58.415420       1 shared_informer.go:320] Caches are synced for service config
	I0211 02:08:58.415437       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b2919d4a128ddec1fedb004202b98aa018123cfa8d4391368f901a81a07abd64] <==
	I0211 02:09:36.457926       1 serving.go:386] Generated self-signed cert in-memory
	W0211 02:09:37.917294       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0211 02:09:37.917333       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0211 02:09:37.917345       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0211 02:09:37.917355       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0211 02:09:37.932425       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0211 02:09:37.932456       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0211 02:09:37.934891       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0211 02:09:37.934968       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0211 02:09:37.935197       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0211 02:09:37.935325       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0211 02:09:38.036079       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c821bc7e1ee3c1fe1d6b3b8d08f247cad0405eeb1de4bbbf133b5c65e28a3cce] <==
	I0211 02:08:54.964479       1 serving.go:386] Generated self-signed cert in-memory
	W0211 02:08:57.175522       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0211 02:08:57.175650       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0211 02:08:57.175691       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0211 02:08:57.175755       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0211 02:08:57.426672       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0211 02:08:57.426789       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0211 02:08:57.429835       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0211 02:08:57.429905       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0211 02:08:57.430009       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0211 02:08:57.430128       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0211 02:08:57.530563       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0211 02:09:18.171239       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0211 02:09:18.171322       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0211 02:09:18.171468       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 11 02:12:35 functional-149709 kubelet[5790]: E0211 02:12:35.061347    5790 manager.go:1116] Failed to create existing container: /docker/07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142/crio-48c711a00a3da06a462434b62162a6e1ac84c5536b414a33c1beb435db9ef0a5: Error finding container 48c711a00a3da06a462434b62162a6e1ac84c5536b414a33c1beb435db9ef0a5: Status 404 returned error can't find the container with id 48c711a00a3da06a462434b62162a6e1ac84c5536b414a33c1beb435db9ef0a5
	Feb 11 02:12:35 functional-149709 kubelet[5790]: E0211 02:12:35.061533    5790 manager.go:1116] Failed to create existing container: /crio-9a2d28d996f60f84eaadc42b15a61952f5e59994c24daa4345e57a8272ca0681: Error finding container 9a2d28d996f60f84eaadc42b15a61952f5e59994c24daa4345e57a8272ca0681: Status 404 returned error can't find the container with id 9a2d28d996f60f84eaadc42b15a61952f5e59994c24daa4345e57a8272ca0681
	Feb 11 02:12:35 functional-149709 kubelet[5790]: E0211 02:12:35.061696    5790 manager.go:1116] Failed to create existing container: /crio-93ca34f2cbc3b8a17e30b3fd3810fc5a3cfa544a0f8b82d5bb843ba771e09add: Error finding container 93ca34f2cbc3b8a17e30b3fd3810fc5a3cfa544a0f8b82d5bb843ba771e09add: Status 404 returned error can't find the container with id 93ca34f2cbc3b8a17e30b3fd3810fc5a3cfa544a0f8b82d5bb843ba771e09add
	Feb 11 02:12:35 functional-149709 kubelet[5790]: E0211 02:12:35.061871    5790 manager.go:1116] Failed to create existing container: /crio-a89530c23713348363b0f58824dbcef83e38d2f7216ebeb1aea2f83eb2343b9b: Error finding container a89530c23713348363b0f58824dbcef83e38d2f7216ebeb1aea2f83eb2343b9b: Status 404 returned error can't find the container with id a89530c23713348363b0f58824dbcef83e38d2f7216ebeb1aea2f83eb2343b9b
	Feb 11 02:12:35 functional-149709 kubelet[5790]: E0211 02:12:35.062065    5790 manager.go:1116] Failed to create existing container: /docker/07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142/crio-e5bac457626078205f714c6d076214956d5e0370519256508bba635b460e134a: Error finding container e5bac457626078205f714c6d076214956d5e0370519256508bba635b460e134a: Status 404 returned error can't find the container with id e5bac457626078205f714c6d076214956d5e0370519256508bba635b460e134a
	Feb 11 02:12:35 functional-149709 kubelet[5790]: E0211 02:12:35.062261    5790 manager.go:1116] Failed to create existing container: /docker/07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142/crio-a89530c23713348363b0f58824dbcef83e38d2f7216ebeb1aea2f83eb2343b9b: Error finding container a89530c23713348363b0f58824dbcef83e38d2f7216ebeb1aea2f83eb2343b9b: Status 404 returned error can't find the container with id a89530c23713348363b0f58824dbcef83e38d2f7216ebeb1aea2f83eb2343b9b
	Feb 11 02:12:35 functional-149709 kubelet[5790]: E0211 02:12:35.062438    5790 manager.go:1116] Failed to create existing container: /crio-58081c3fc053a9608fffb35c420eda4a1e43d0b2429e9ebb1ae9259c085e5dc9: Error finding container 58081c3fc053a9608fffb35c420eda4a1e43d0b2429e9ebb1ae9259c085e5dc9: Status 404 returned error can't find the container with id 58081c3fc053a9608fffb35c420eda4a1e43d0b2429e9ebb1ae9259c085e5dc9
	Feb 11 02:12:35 functional-149709 kubelet[5790]: E0211 02:12:35.062598    5790 manager.go:1116] Failed to create existing container: /crio-83ccf40d323ca79d2ced724ff269dcb8fbb90ce5a972ab7f705129b6c51240b0: Error finding container 83ccf40d323ca79d2ced724ff269dcb8fbb90ce5a972ab7f705129b6c51240b0: Status 404 returned error can't find the container with id 83ccf40d323ca79d2ced724ff269dcb8fbb90ce5a972ab7f705129b6c51240b0
	Feb 11 02:12:35 functional-149709 kubelet[5790]: E0211 02:12:35.062777    5790 manager.go:1116] Failed to create existing container: /crio-3332ca8daec6e0eb7bb6fe55ac6d0765ef558715830dd6c1e4a57d102e4d5d89: Error finding container 3332ca8daec6e0eb7bb6fe55ac6d0765ef558715830dd6c1e4a57d102e4d5d89: Status 404 returned error can't find the container with id 3332ca8daec6e0eb7bb6fe55ac6d0765ef558715830dd6c1e4a57d102e4d5d89
	Feb 11 02:12:35 functional-149709 kubelet[5790]: E0211 02:12:35.062961    5790 manager.go:1116] Failed to create existing container: /docker/07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142/crio-58081c3fc053a9608fffb35c420eda4a1e43d0b2429e9ebb1ae9259c085e5dc9: Error finding container 58081c3fc053a9608fffb35c420eda4a1e43d0b2429e9ebb1ae9259c085e5dc9: Status 404 returned error can't find the container with id 58081c3fc053a9608fffb35c420eda4a1e43d0b2429e9ebb1ae9259c085e5dc9
	Feb 11 02:12:35 functional-149709 kubelet[5790]: E0211 02:12:35.141168    5790 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239955140998067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:12:35 functional-149709 kubelet[5790]: E0211 02:12:35.141208    5790 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239955140998067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:12:45 functional-149709 kubelet[5790]: E0211 02:12:45.142793    5790 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239965142627864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:12:45 functional-149709 kubelet[5790]: E0211 02:12:45.142827    5790 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239965142627864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:12:48 functional-149709 kubelet[5790]: E0211 02:12:48.771016    5790 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Feb 11 02:12:48 functional-149709 kubelet[5790]: E0211 02:12:48.771083    5790 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Feb 11 02:12:48 functional-149709 kubelet[5790]: E0211 02:12:48.771308    5790 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6tlxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(6720f745-8fe7-4fa3-b048-17068b1b53da): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Feb 11 02:12:48 functional-149709 kubelet[5790]: E0211 02:12:48.772554    5790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="6720f745-8fe7-4fa3-b048-17068b1b53da"
	Feb 11 02:12:55 functional-149709 kubelet[5790]: E0211 02:12:55.144015    5790 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239975143863586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:12:55 functional-149709 kubelet[5790]: E0211 02:12:55.144055    5790 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239975143863586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:13:03 functional-149709 kubelet[5790]: E0211 02:13:03.850223    5790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="6720f745-8fe7-4fa3-b048-17068b1b53da"
	Feb 11 02:13:05 functional-149709 kubelet[5790]: E0211 02:13:05.145559    5790 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239985145367212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:13:05 functional-149709 kubelet[5790]: E0211 02:13:05.145594    5790 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239985145367212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:13:15 functional-149709 kubelet[5790]: E0211 02:13:15.147055    5790 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239995146882101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:13:15 functional-149709 kubelet[5790]: E0211 02:13:15.147086    5790 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239995146882101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [443f9178951085086e8a084dab2223084c1e9c24e79675132764e7675237edaa] <==
	2025/02/11 02:10:12 Using namespace: kubernetes-dashboard
	2025/02/11 02:10:12 Using in-cluster config to connect to apiserver
	2025/02/11 02:10:12 Using secret token for csrf signing
	2025/02/11 02:10:12 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/02/11 02:10:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/02/11 02:10:12 Successful initial request to the apiserver, version: v1.32.1
	2025/02/11 02:10:12 Generating JWE encryption key
	2025/02/11 02:10:12 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/02/11 02:10:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/02/11 02:10:12 Initializing JWE encryption key from synchronized object
	2025/02/11 02:10:12 Creating in-cluster Sidecar client
	2025/02/11 02:10:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/11 02:10:12 Serving insecurely on HTTP port: 9090
	2025/02/11 02:10:42 Successful request to sidecar
	2025/02/11 02:10:12 Starting overwatch
	
	
	==> storage-provisioner [af88c17c1d865884cce3ed5ef47a3343d08fd32d984fa4603d9b3f55072a7194] <==
	I0211 02:09:39.220321       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0211 02:09:39.230587       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0211 02:09:39.230635       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0211 02:09:56.627365       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0211 02:09:56.627504       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-149709_815722e1-41d8-4d0d-9192-3989829f94b1!
	I0211 02:09:56.627505       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"257d8941-0b1d-4bde-b679-b11757715a03", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-149709_815722e1-41d8-4d0d-9192-3989829f94b1 became leader
	I0211 02:09:56.728134       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-149709_815722e1-41d8-4d0d-9192-3989829f94b1!
	I0211 02:10:15.273687       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0211 02:10:15.275014       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"c22aa53e-00b0-4d6d-829c-5c40e426662e", APIVersion:"v1", ResourceVersion:"836", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0211 02:10:15.274634       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    bb47fe95-8afe-4c64-a347-0f7c7d4c022c 379 0 2025-02-11 02:08:18 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-02-11 02:08:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-c22aa53e-00b0-4d6d-829c-5c40e426662e &PersistentVolumeClaim{ObjectMeta:{myclaim  default  c22aa53e-00b0-4d6d-829c-5c40e426662e 836 0 2025-02-11 02:10:15 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-02-11 02:10:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-02-11 02:10:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0211 02:10:15.275349       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-c22aa53e-00b0-4d6d-829c-5c40e426662e" provisioned
	I0211 02:10:15.275421       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0211 02:10:15.275479       1 volume_store.go:212] Trying to save persistentvolume "pvc-c22aa53e-00b0-4d6d-829c-5c40e426662e"
	I0211 02:10:15.287239       1 volume_store.go:219] persistentvolume "pvc-c22aa53e-00b0-4d6d-829c-5c40e426662e" saved
	I0211 02:10:15.287571       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"c22aa53e-00b0-4d6d-829c-5c40e426662e", APIVersion:"v1", ResourceVersion:"836", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-c22aa53e-00b0-4d6d-829c-5c40e426662e
	
	
	==> storage-provisioner [ea9cf7d369c969e5367cec8df8404f9b3f9d504cd681ca57ddc36cd4fae36be2] <==
	I0211 02:08:56.629101       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0211 02:08:57.427149       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0211 02:08:57.427239       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0211 02:09:14.825367       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0211 02:09:14.825479       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"257d8941-0b1d-4bde-b679-b11757715a03", APIVersion:"v1", ResourceVersion:"567", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-149709_aa0ff826-07ef-47a0-a974-fc2cfc019916 became leader
	I0211 02:09:14.825574       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-149709_aa0ff826-07ef-47a0-a974-fc2cfc019916!
	I0211 02:09:14.926152       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-149709_aa0ff826-07ef-47a0-a974-fc2cfc019916!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-149709 -n functional-149709
helpers_test.go:261: (dbg) Run:  kubectl --context functional-149709 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-7rqk9 nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-149709 describe pod busybox-mount mysql-58ccfd96bb-7rqk9 nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-149709 describe pod busybox-mount mysql-58ccfd96bb-7rqk9 nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-149709/192.168.49.2
	Start Time:       Tue, 11 Feb 2025 02:10:06 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://f8f2ca9ddefaeaed7331f2a14be99cd808ccd7fd9629a1ce3d4f63f679f92746
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 11 Feb 2025 02:10:08 +0000
	      Finished:     Tue, 11 Feb 2025 02:10:08 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gnrsr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-gnrsr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m11s  default-scheduler  Successfully assigned default/busybox-mount to functional-149709
	  Normal  Pulling    3m12s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m10s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.175s (1.175s including waiting). Image size: 4631262 bytes.
	  Normal  Created    3m10s  kubelet            Created container: mount-munger
	  Normal  Started    3m10s  kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-7rqk9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-149709/192.168.49.2
	Start Time:       Tue, 11 Feb 2025 02:10:17 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vx9ff (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vx9ff:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m                  default-scheduler  Successfully assigned default/mysql-58ccfd96bb-7rqk9 to functional-149709
	  Warning  Failed     60s                 kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     60s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    60s                 kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     60s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    48s (x2 over 3m1s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-149709/192.168.49.2
	Start Time:       Tue, 11 Feb 2025 02:10:15 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6tlxm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6tlxm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m2s                default-scheduler  Successfully assigned default/nginx-svc to functional-149709
	  Warning  Failed     2m2s                kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     30s (x2 over 2m2s)  kubelet            Error: ErrImagePull
	  Warning  Failed     30s                 kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    15s (x2 over 2m1s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     15s (x2 over 2m1s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    3s (x3 over 3m3s)   kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-149709/192.168.49.2
	Start Time:       Tue, 11 Feb 2025 02:10:15 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8wfk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-x8wfk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m2s                default-scheduler  Successfully assigned default/sp-pod to functional-149709
	  Warning  Failed     91s                 kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     91s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    90s                 kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     90s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    78s (x2 over 3m3s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
E0211 02:13:53.188663   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.16s)
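Note: the pod failures above (sp-pod, nginx-svc, mysql-58ccfd96bb-7rqk9) share one root cause recorded in the kubelet events: anonymous pulls from Docker Hub hit the registry rate limit (toomanyrequests). The commands below are a minimal mitigation sketch only, not something this CI job runs; the profile/context name functional-149709 and the image names come from the logs above, while the secret name regcred and the credential placeholders are hypothetical.

	# Option 1: pre-load the rate-limited images into the node so kubelet never pulls from Docker Hub
	minikube -p functional-149709 image load docker.io/mysql:5.7
	minikube -p functional-149709 image load docker.io/nginx:alpine

	# Option 2: authenticate pulls by attaching registry credentials to the default service account
	kubectl --context functional-149709 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context functional-149709 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'

With option 2, pods that already exist (such as the ones listed above) would still need to be recreated to pick up the service-account change.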

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-149709 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-7rqk9" [5c47c429-047e-4f75-8996-13d3d2320b17] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1816: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1816: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-149709 -n functional-149709
functional_test.go:1816: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-02-11 02:20:17.638492711 +0000 UTC m=+1116.843238864
functional_test.go:1816: (dbg) Run:  kubectl --context functional-149709 describe po mysql-58ccfd96bb-7rqk9 -n default
functional_test.go:1816: (dbg) kubectl --context functional-149709 describe po mysql-58ccfd96bb-7rqk9 -n default:
Name:             mysql-58ccfd96bb-7rqk9
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-149709/192.168.49.2
Start Time:       Tue, 11 Feb 2025 02:10:17 +0000
Labels:           app=mysql
pod-template-hash=58ccfd96bb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/mysql-58ccfd96bb
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vx9ff (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-vx9ff:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-58ccfd96bb-7rqk9 to functional-149709
Warning  Failed     4m11s                 kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    72s (x5 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     39s (x4 over 7m59s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     39s (x5 over 7m59s)   kubelet            Error: ErrImagePull
Normal   BackOff    11s (x12 over 7m59s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
Warning  Failed     11s (x12 over 7m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1816: (dbg) Run:  kubectl --context functional-149709 logs mysql-58ccfd96bb-7rqk9 -n default
functional_test.go:1816: (dbg) Non-zero exit: kubectl --context functional-149709 logs mysql-58ccfd96bb-7rqk9 -n default: exit status 1 (70.780894ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-58ccfd96bb-7rqk9" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1816: kubectl --context functional-149709 logs mysql-58ccfd96bb-7rqk9 -n default: exit status 1
functional_test.go:1818: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-149709
helpers_test.go:235: (dbg) docker inspect functional-149709:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142",
	        "Created": "2025-02-11T02:07:56.49585572Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 42905,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-11T02:07:56.603418102Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e72c4cbe9b296d8a58fbcae1a7b969fa1cee662cd7b86f2d4efc5e146519cf0a",
	        "ResolvConfPath": "/var/lib/docker/containers/07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142/hostname",
	        "HostsPath": "/var/lib/docker/containers/07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142/hosts",
	        "LogPath": "/var/lib/docker/containers/07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142/07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142-json.log",
	        "Name": "/functional-149709",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-149709:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-149709",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/774b0ac5a1eed4992955f93033c34e95b369681db7e0c238c1d7dcf09dff4833-init/diff:/var/lib/docker/overlay2/de28131002c1cf3ac1375d9db63a3e00d2a843930d2c723033b62dc11010311c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/774b0ac5a1eed4992955f93033c34e95b369681db7e0c238c1d7dcf09dff4833/merged",
	                "UpperDir": "/var/lib/docker/overlay2/774b0ac5a1eed4992955f93033c34e95b369681db7e0c238c1d7dcf09dff4833/diff",
	                "WorkDir": "/var/lib/docker/overlay2/774b0ac5a1eed4992955f93033c34e95b369681db7e0c238c1d7dcf09dff4833/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-149709",
	                "Source": "/var/lib/docker/volumes/functional-149709/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-149709",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-149709",
	                "name.minikube.sigs.k8s.io": "functional-149709",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "51df9668d661fb8d3aac379d46caad7369396197e86e07065f479acae8fa8c0d",
	            "SandboxKey": "/var/run/docker/netns/51df9668d661",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-149709": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2645b0c158dde57d083e408b613f3f93258649b1b014db7cd084427714b28b7b",
	                    "EndpointID": "0750791058d3fe9c90e4e46f52a3a3f02c0e80dc775b76ccc164c34a91af52e7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-149709",
	                        "07de83934a33"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
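For anyone replaying this post-mortem by hand, the fields that matter in the inspect output above can be pulled out with docker's Go-template format flag instead of scanning the full JSON. The container name functional-149709 and the 8441/tcp apiserver port are taken from the output above; these commands are illustrative and not part of the test harness:

	# host port that the container's 8441/tcp (apiserver) port is published on
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-149709
	# container IP on the functional-149709 network
	docker inspect -f '{{(index .NetworkSettings.Networks "functional-149709").IPAddress}}' functional-149709
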
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-149709 -n functional-149709
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-149709 logs -n 25: (1.393439191s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|---------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-149709 ssh findmnt         | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | -T /mount3                            |                   |         |         |                     |                     |
	| service        | functional-149709                     | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | service hello-node --url              |                   |         |         |                     |                     |
	|                | --format={{.IP}}                      |                   |         |         |                     |                     |
	| mount          | -p functional-149709                  | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC |                     |
	|                | --kill=true                           |                   |         |         |                     |                     |
	| addons         | functional-149709 addons list         | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	| tunnel         | functional-149709 tunnel              | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC |                     |
	|                | --alsologtostderr                     |                   |         |         |                     |                     |
	| service        | functional-149709 service             | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | hello-node --url                      |                   |         |         |                     |                     |
	| addons         | functional-149709 addons list         | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | -o json                               |                   |         |         |                     |                     |
	| ssh            | functional-149709 ssh sudo cat        | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC |                     |
	|                | /etc/ssl/certs/19028.pem              |                   |         |         |                     |                     |
	| ssh            | functional-149709 ssh sudo cat        | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | /usr/share/ca-certificates/19028.pem  |                   |         |         |                     |                     |
	| ssh            | functional-149709 ssh sudo cat        | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | /etc/ssl/certs/51391683.0             |                   |         |         |                     |                     |
	| ssh            | functional-149709 ssh sudo cat        | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | /etc/ssl/certs/190282.pem             |                   |         |         |                     |                     |
	| ssh            | functional-149709 ssh sudo cat        | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | /usr/share/ca-certificates/190282.pem |                   |         |         |                     |                     |
	| ssh            | functional-149709 ssh sudo cat        | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0             |                   |         |         |                     |                     |
	| ssh            | functional-149709 ssh sudo cat        | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | /etc/test/nested/copy/19028/hosts     |                   |         |         |                     |                     |
	| service        | functional-149709 service             | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | hello-node-connect --url              |                   |         |         |                     |                     |
	| image          | functional-149709                     | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | image ls --format short               |                   |         |         |                     |                     |
	|                | --alsologtostderr                     |                   |         |         |                     |                     |
	| ssh            | functional-149709 ssh pgrep           | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC |                     |
	|                | buildkitd                             |                   |         |         |                     |                     |
	| image          | functional-149709 image build -t      | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | localhost/my-image:functional-149709  |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr      |                   |         |         |                     |                     |
	| image          | functional-149709 image ls            | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	| image          | functional-149709                     | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | image ls --format yaml                |                   |         |         |                     |                     |
	|                | --alsologtostderr                     |                   |         |         |                     |                     |
	| image          | functional-149709                     | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | image ls --format json                |                   |         |         |                     |                     |
	|                | --alsologtostderr                     |                   |         |         |                     |                     |
	| image          | functional-149709                     | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | image ls --format table               |                   |         |         |                     |                     |
	|                | --alsologtostderr                     |                   |         |         |                     |                     |
	| update-context | functional-149709                     | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | update-context                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                |                   |         |         |                     |                     |
	| update-context | functional-149709                     | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | update-context                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                |                   |         |         |                     |                     |
	| update-context | functional-149709                     | functional-149709 | jenkins | v1.35.0 | 11 Feb 25 02:10 UTC | 11 Feb 25 02:10 UTC |
	|                | update-context                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                |                   |         |         |                     |                     |
	|----------------|---------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/11 02:10:05
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0211 02:10:05.320272   54278 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:10:05.320648   54278 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:10:05.320661   54278 out.go:358] Setting ErrFile to fd 2...
	I0211 02:10:05.320669   54278 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:10:05.320988   54278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
	I0211 02:10:05.321699   54278 out.go:352] Setting JSON to false
	I0211 02:10:05.323071   54278 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3154,"bootTime":1739236651,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 02:10:05.323204   54278 start.go:139] virtualization: kvm guest
	I0211 02:10:05.325736   54278 out.go:177] * [functional-149709] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 02:10:05.327477   54278 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 02:10:05.327481   54278 notify.go:220] Checking for updates...
	I0211 02:10:05.329056   54278 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 02:10:05.330527   54278 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	I0211 02:10:05.331831   54278 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	I0211 02:10:05.333144   54278 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 02:10:05.334339   54278 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 02:10:05.307212   54262 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:10:05.307858   54262 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:10:05.349332   54262 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0211 02:10:05.349478   54262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:10:05.413532   54262 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:55 SystemTime:2025-02-11 02:10:05.40286377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:10:05.413681   54262 docker.go:318] overlay module found
	I0211 02:10:05.416347   54262 out.go:177] * Using the docker driver based on existing profile
	I0211 02:10:05.417751   54262 start.go:297] selected driver: docker
	I0211 02:10:05.417781   54262 start.go:901] validating driver "docker" against &{Name:functional-149709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-149709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:10:05.417911   54262 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 02:10:05.420732   54262 out.go:201] 
	W0211 02:10:05.422136   54262 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0211 02:10:05.423576   54262 out.go:201] 
	I0211 02:10:05.335902   54278 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:10:05.336489   54278 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:10:05.374932   54278 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0211 02:10:05.375033   54278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:10:05.450688   54278 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:55 SystemTime:2025-02-11 02:10:05.428519969 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:10:05.450832   54278 docker.go:318] overlay module found
	I0211 02:10:05.453001   54278 out.go:177] * Using the docker driver based on existing profile
	I0211 02:10:05.454582   54278 start.go:297] selected driver: docker
	I0211 02:10:05.454596   54278 start.go:901] validating driver "docker" against &{Name:functional-149709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-149709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:10:05.454697   54278 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 02:10:05.454799   54278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:10:05.526387   54278 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:55 SystemTime:2025-02-11 02:10:05.512545396 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:10:05.527258   54278 cni.go:84] Creating CNI manager for ""
	I0211 02:10:05.527339   54278 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0211 02:10:05.527405   54278 start.go:340] cluster config:
	{Name:functional-149709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-149709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:10:05.529484   54278 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Feb 11 02:18:42 functional-149709 crio[5428]: time="2025-02-11 02:18:42.849232883Z" level=info msg="Image docker.io/mysql:5.7 not found" id=b0946815-783a-4113-8004-e5901a1c943f name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:18:53 functional-149709 crio[5428]: time="2025-02-11 02:18:53.849439367Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=0441d693-8bca-41c8-957a-9311ecfbed45 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:18:53 functional-149709 crio[5428]: time="2025-02-11 02:18:53.849528694Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=8b488080-0930-43ba-b48e-f2129fc72a85 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:18:53 functional-149709 crio[5428]: time="2025-02-11 02:18:53.849741825Z" level=info msg="Image docker.io/mysql:5.7 not found" id=8b488080-0930-43ba-b48e-f2129fc72a85 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:18:53 functional-149709 crio[5428]: time="2025-02-11 02:18:53.849814996Z" level=info msg="Image docker.io/nginx:alpine not found" id=0441d693-8bca-41c8-957a-9311ecfbed45 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:19:05 functional-149709 crio[5428]: time="2025-02-11 02:19:05.849096505Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=3638d9c6-a2a7-4314-99c0-3bfac8a2b129 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:19:05 functional-149709 crio[5428]: time="2025-02-11 02:19:05.849331052Z" level=info msg="Image docker.io/mysql:5.7 not found" id=3638d9c6-a2a7-4314-99c0-3bfac8a2b129 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:19:07 functional-149709 crio[5428]: time="2025-02-11 02:19:07.486893651Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=65575880-9881-4aba-b01c-ec409522d54d name=/runtime.v1.ImageService/PullImage
	Feb 11 02:19:07 functional-149709 crio[5428]: time="2025-02-11 02:19:07.488049089Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Feb 11 02:19:08 functional-149709 crio[5428]: time="2025-02-11 02:19:08.849407072Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=74a83570-5b8d-4c86-a92a-53ded0be7082 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:19:08 functional-149709 crio[5428]: time="2025-02-11 02:19:08.849634995Z" level=info msg="Image docker.io/nginx:alpine not found" id=74a83570-5b8d-4c86-a92a-53ded0be7082 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:19:19 functional-149709 crio[5428]: time="2025-02-11 02:19:19.849660157Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=beb71afa-2b1d-40fd-9ec3-f094fbc54093 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:19:19 functional-149709 crio[5428]: time="2025-02-11 02:19:19.849896568Z" level=info msg="Image docker.io/nginx:alpine not found" id=beb71afa-2b1d-40fd-9ec3-f094fbc54093 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:19:31 functional-149709 crio[5428]: time="2025-02-11 02:19:31.849626504Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=933d9e73-33b7-47db-b949-bbb50c858bb2 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:19:31 functional-149709 crio[5428]: time="2025-02-11 02:19:31.849856820Z" level=info msg="Image docker.io/nginx:alpine not found" id=933d9e73-33b7-47db-b949-bbb50c858bb2 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:19:44 functional-149709 crio[5428]: time="2025-02-11 02:19:44.849492463Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=25baa6cc-70f1-468c-a77f-e4265c896f4b name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:19:44 functional-149709 crio[5428]: time="2025-02-11 02:19:44.849778888Z" level=info msg="Image docker.io/nginx:alpine not found" id=25baa6cc-70f1-468c-a77f-e4265c896f4b name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:19:53 functional-149709 crio[5428]: time="2025-02-11 02:19:53.849371844Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=11426c1b-81a5-41a3-b7c1-d5d2ae8e52fd name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:19:53 functional-149709 crio[5428]: time="2025-02-11 02:19:53.849649139Z" level=info msg="Image docker.io/mysql:5.7 not found" id=11426c1b-81a5-41a3-b7c1-d5d2ae8e52fd name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:19:57 functional-149709 crio[5428]: time="2025-02-11 02:19:57.849174625Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=8d5d76eb-a7dd-4576-82e0-b4f09e001dae name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:19:57 functional-149709 crio[5428]: time="2025-02-11 02:19:57.849415748Z" level=info msg="Image docker.io/nginx:alpine not found" id=8d5d76eb-a7dd-4576-82e0-b4f09e001dae name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:20:06 functional-149709 crio[5428]: time="2025-02-11 02:20:06.850643785Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=b3c2b6c8-5085-46f1-ab46-2a79a7967c72 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:20:06 functional-149709 crio[5428]: time="2025-02-11 02:20:06.850860559Z" level=info msg="Image docker.io/mysql:5.7 not found" id=b3c2b6c8-5085-46f1-ab46-2a79a7967c72 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:20:08 functional-149709 crio[5428]: time="2025-02-11 02:20:08.849519127Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=ff360844-f658-4837-94c7-68ffa9b30d14 name=/runtime.v1.ImageService/ImageStatus
	Feb 11 02:20:08 functional-149709 crio[5428]: time="2025-02-11 02:20:08.849777302Z" level=info msg="Image docker.io/nginx:alpine not found" id=ff360844-f658-4837-94c7-68ffa9b30d14 name=/runtime.v1.ImageService/ImageStatus
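	# Triage note (annotation, not part of the captured log): the repeated "not found" messages above show
	# docker.io/mysql:5.7 and docker.io/nginx:alpine never finished pulling from Docker Hub, which is why the
	# MySQL workload behind this test stayed unready. A manual mitigation sketch when reproducing locally is to
	# pull the images on the host and load them into the cluster runtime, taking the registry out of the critical path:
	docker pull docker.io/mysql:5.7
	out/minikube-linux-amd64 -p functional-149709 image load docker.io/mysql:5.7
	docker pull docker.io/nginx:alpine
	out/minikube-linux-amd64 -p functional-149709 image load docker.io/nginx:alpine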
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	7e7f8ba305b43       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 10 minutes ago      Running             echoserver                  0                   fffa070ac638f       hello-node-connect-58f9cf68d8-bvxj9
	67f64cc560215       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   95bec25c3639a       dashboard-metrics-scraper-5d59dccf9b-r64gk
	443f917895108       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         10 minutes ago      Running             kubernetes-dashboard        0                   60294ac515e46       kubernetes-dashboard-7779f9b69b-hhfxn
	f8f2ca9ddefae       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              10 minutes ago      Exited              mount-munger                0                   51933d22a29f8       busybox-mount
	347f3acd039f3       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   46b85f3d5b73f       hello-node-fcfd88b6f-fn66v
	ee4aca1380c7a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 10 minutes ago      Running             coredns                     3                   3332ca8daec6e       coredns-668d6bf9bc-dhfq8
	927003a362441       d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56                                                 10 minutes ago      Running             kindnet-cni                 3                   58081c3fc053a       kindnet-s67cs
	af88c17c1d865       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         3                   a89530c237133       storage-provisioner
	b1489888dc245       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                 10 minutes ago      Running             kube-proxy                  3                   93ca34f2cbc3b       kube-proxy-z65nc
	724750e44ce8c       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                 10 minutes ago      Running             kube-apiserver              0                   91cb17a4d5f3e       kube-apiserver-functional-149709
	b2919d4a128dd       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                 10 minutes ago      Running             kube-scheduler              3                   48c711a00a3da       kube-scheduler-functional-149709
	a81f56da2835a       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                 10 minutes ago      Running             kube-controller-manager     3                   9a2d28d996f60       kube-controller-manager-functional-149709
	13f6f2d25afc4       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 10 minutes ago      Running             etcd                        3                   e5bac45762607       etcd-functional-149709
	84de15df033ca       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 11 minutes ago      Exited              coredns                     2                   3332ca8daec6e       coredns-668d6bf9bc-dhfq8
	ea9cf7d369c96       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         2                   a89530c237133       storage-provisioner
	ae4d3eef6bbe1       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                 11 minutes ago      Exited              etcd                        2                   e5bac45762607       etcd-functional-149709
	c821bc7e1ee3c       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                 11 minutes ago      Exited              kube-scheduler              2                   48c711a00a3da       kube-scheduler-functional-149709
	db71bc29b9056       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                 11 minutes ago      Exited              kube-proxy                  2                   93ca34f2cbc3b       kube-proxy-z65nc
	55bc471731df9       d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56                                                 11 minutes ago      Exited              kindnet-cni                 2                   58081c3fc053a       kindnet-s67cs
	c93bcaac3d750       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                 11 minutes ago      Exited              kube-controller-manager     2                   9a2d28d996f60       kube-controller-manager-functional-149709
	
	
	==> coredns [84de15df033cad9be45d1fb4f1bb8202419d7b27eebbda814ad9a51770b7d1ad] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40915 - 41543 "HINFO IN 7172967796518925767.611749124021480384. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.030138844s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ee4aca1380c7a15be8adde30e48d3fa13cbce13d70087f4b8018e38ec6f7d59d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51854 - 32205 "HINFO IN 5883833411052480777.7266958046914092204. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027584315s
	
	
	==> describe nodes <==
	Name:               functional-149709
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-149709
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e91f70b9b442caa4bec80b031add390ac34d321
	                    minikube.k8s.io/name=functional-149709
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_11T02_08_13_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 11 Feb 2025 02:08:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-149709
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 11 Feb 2025 02:20:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 11 Feb 2025 02:19:48 +0000   Tue, 11 Feb 2025 02:08:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 11 Feb 2025 02:19:48 +0000   Tue, 11 Feb 2025 02:08:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 11 Feb 2025 02:19:48 +0000   Tue, 11 Feb 2025 02:08:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 11 Feb 2025 02:19:48 +0000   Tue, 11 Feb 2025 02:08:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-149709
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	System Info:
	  Machine ID:                 24da52d556764f96a708105f328eb55b
	  System UUID:                92230bdf-e911-4103-b800-1f0e670ce984
	  Boot ID:                    144975d8-f0ab-4312-b95d-86c41201d6b3
	  Kernel Version:             5.15.0-1075-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-bvxj9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-fcfd88b6f-fn66v                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-58ccfd96bb-7rqk9                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-668d6bf9bc-dhfq8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-149709                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-s67cs                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-149709              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-149709     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-z65nc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-149709              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-r64gk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-hhfxn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-149709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-149709 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-149709 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-149709 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-149709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-149709 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-149709 event: Registered Node functional-149709 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-149709 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-149709 event: Registered Node functional-149709 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-149709 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-149709 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-149709 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-149709 event: Registered Node functional-149709 in Controller
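	# Triage note (annotation, not part of the captured log): mysql-58ccfd96bb-7rqk9 listed above is the workload
	# behind the failing TestFunctional/parallel/MySQL, and its image never appears as pulled in the CRI-O log above.
	# Pod-level events usually surface the image-pull failure directly; the pod and context names are taken from this
	# output, and the commands are illustrative:
	kubectl --context functional-149709 describe pod mysql-58ccfd96bb-7rqk9
	kubectl --context functional-149709 get events --sort-by=.lastTimestamp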
	
	
	==> dmesg <==
	[  +0.635975] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.022896] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.252830] kauditd_printk_skb: 46 callbacks suppressed
	[Feb11 02:04] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[  +1.011720] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[  +2.015838] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[  +4.163567] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[  +8.187242] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[Feb11 02:05] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[ +33.280901] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 02 4e 43 eb 47 eb 6a 39 83 40 1f 75 08 00
	[Feb11 02:10] FS-Cache: Duplicate cookie detected
	[  +0.004731] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006748] FS-Cache: O-cookie d=0000000072d28f37{9P.session} n=000000007e783556
	[  +0.007525] FS-Cache: O-key=[10] '34323935363832373232'
	[  +0.005372] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006609] FS-Cache: N-cookie d=0000000072d28f37{9P.session} n=00000000b78876c7
	[  +0.007524] FS-Cache: N-key=[10] '34323935363832373232'
	[ +11.887533] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [13f6f2d25afc48314e920461dcf24b65df96a31eb763d5769ab5d42a96445da9] <==
	{"level":"info","ts":"2025-02-11T02:09:35.743285Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-11T02:09:35.744776Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-11T02:09:35.745052Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-11T02:09:35.745094Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-11T02:09:35.745213Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-11T02:09:35.745231Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-11T02:09:36.734523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-02-11T02:09:36.734627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-02-11T02:09:36.734662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-11T02:09:36.734685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2025-02-11T02:09:36.734698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-02-11T02:09:36.734709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2025-02-11T02:09:36.734726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-02-11T02:09:36.737356Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-149709 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-11T02:09:36.737370Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-11T02:09:36.737376Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-11T02:09:36.737580Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-11T02:09:36.737603Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-11T02:09:36.738207Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-11T02:09:36.738396Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-11T02:09:36.739045Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-11T02:09:36.739211Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-02-11T02:19:36.884797Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1158}
	{"level":"info","ts":"2025-02-11T02:19:36.897116Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1158,"took":"11.948056ms","hash":286078702,"current-db-size-bytes":4435968,"current-db-size":"4.4 MB","current-db-size-in-use-bytes":1904640,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-02-11T02:19:36.897159Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":286078702,"revision":1158,"compact-revision":-1}
	
	
	==> etcd [ae4d3eef6bbe171f428c3c666d75dc4dcaf7da5231e983d379d87ed49a826c80] <==
	{"level":"info","ts":"2025-02-11T02:08:56.231198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-11T02:08:56.231222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-02-11T02:08:56.231237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2025-02-11T02:08:56.231244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-11T02:08:56.231277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2025-02-11T02:08:56.231286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-11T02:08:56.232508Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-149709 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-11T02:08:56.232549Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-11T02:08:56.232584Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-11T02:08:56.232810Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-11T02:08:56.232842Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-11T02:08:56.233302Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-11T02:08:56.233534Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-11T02:08:56.233868Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-11T02:08:56.234599Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-02-11T02:09:18.169491Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-02-11T02:09:18.169573Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-149709","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2025-02-11T02:09:18.169682Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-11T02:09:18.169783Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-11T02:09:18.179760Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-11T02:09:18.179872Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-11T02:09:18.179964Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-02-11T02:09:18.182611Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-11T02:09:18.182700Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-11T02:09:18.182740Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-149709","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 02:20:18 up  1:02,  0 users,  load average: 0.16, 0.24, 0.29
	Linux functional-149709 5.15.0-1075-gcp #84~20.04.1-Ubuntu SMP Thu Jan 16 20:44:47 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [55bc471731df9c7b9890eb0f905cd6dc2cbd18872df14b110fd858aee525f6e7] <==
	I0211 02:08:54.225156       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0211 02:08:54.316602       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0211 02:08:54.316896       1 main.go:148] setting mtu 1500 for CNI 
	I0211 02:08:54.316964       1 main.go:178] kindnetd IP family: "ipv4"
	I0211 02:08:54.317002       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0211 02:08:54.716770       1 controller.go:361] Starting controller kube-network-policies
	I0211 02:08:54.718251       1 controller.go:365] Waiting for informer caches to sync
	I0211 02:08:54.718269       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0211 02:08:57.419191       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0211 02:08:57.419279       1 metrics.go:61] Registering metrics
	I0211 02:08:57.424776       1 controller.go:401] Syncing nftables rules
	I0211 02:09:04.717279       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:09:04.717365       1 main.go:301] handling current node
	I0211 02:09:14.717171       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:09:14.717243       1 main.go:301] handling current node
	
	
	==> kindnet [927003a36244161fac9082127320d90644d45a793d96e9b958594b349f4b8be6] <==
	I0211 02:18:09.816768       1 main.go:301] handling current node
	I0211 02:18:19.817326       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:18:19.817366       1 main.go:301] handling current node
	I0211 02:18:29.825343       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:18:29.825376       1 main.go:301] handling current node
	I0211 02:18:39.816948       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:18:39.816992       1 main.go:301] handling current node
	I0211 02:18:49.817263       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:18:49.817296       1 main.go:301] handling current node
	I0211 02:18:59.819624       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:18:59.819657       1 main.go:301] handling current node
	I0211 02:19:09.821450       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:19:09.821483       1 main.go:301] handling current node
	I0211 02:19:19.817287       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:19:19.817343       1 main.go:301] handling current node
	I0211 02:19:29.817577       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:19:29.817610       1 main.go:301] handling current node
	I0211 02:19:39.817271       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:19:39.817312       1 main.go:301] handling current node
	I0211 02:19:49.817261       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:19:49.817310       1 main.go:301] handling current node
	I0211 02:19:59.817265       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:19:59.817299       1 main.go:301] handling current node
	I0211 02:20:09.818659       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0211 02:20:09.818694       1 main.go:301] handling current node
	
	
	==> kube-apiserver [724750e44ce8cae9513bd9c08e17de0e9d59da3cef9127a5d037e85aad308969] <==
	I0211 02:09:37.918436       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0211 02:09:37.918454       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0211 02:09:37.918772       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0211 02:09:37.918379       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0211 02:09:37.919285       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0211 02:09:37.921725       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0211 02:09:37.926318       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0211 02:09:37.926734       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0211 02:09:38.044957       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0211 02:09:38.792942       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0211 02:09:39.797125       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0211 02:09:39.934680       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0211 02:09:39.984590       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0211 02:09:39.989886       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0211 02:09:41.142559       1 controller.go:615] quota admission added evaluator for: endpoints
	I0211 02:09:41.243743       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0211 02:09:41.441990       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0211 02:09:59.195401       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.164.177"}
	I0211 02:10:03.301801       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.74.46"}
	I0211 02:10:07.567102       1 controller.go:615] quota admission added evaluator for: namespaces
	I0211 02:10:07.755621       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.48.94"}
	I0211 02:10:07.818475       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.109.79"}
	I0211 02:10:15.441505       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.231.165"}
	I0211 02:10:15.817604       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.252.244"}
	I0211 02:10:17.306946       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.248.57"}
	
	
	==> kube-controller-manager [a81f56da2835a15db3814607908ab32c9b08c5346dc59c0201aa8a547354616e] <==
	I0211 02:10:13.169822       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="98.053µs"
	I0211 02:10:15.173254       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="7.528054ms"
	I0211 02:10:15.173353       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="55.829µs"
	I0211 02:10:15.710033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-58f9cf68d8" duration="11.649649ms"
	I0211 02:10:15.717905       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-58f9cf68d8" duration="7.820961ms"
	I0211 02:10:15.718093       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-58f9cf68d8" duration="57.633µs"
	I0211 02:10:16.173661       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-58f9cf68d8" duration="5.290043ms"
	I0211 02:10:16.173763       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-58f9cf68d8" duration="53.809µs"
	I0211 02:10:17.357185       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="13.574549ms"
	I0211 02:10:17.362177       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="4.946891ms"
	I0211 02:10:17.362284       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="64.67µs"
	I0211 02:10:17.363702       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="77.12µs"
	I0211 02:10:39.186393       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-149709"
	I0211 02:12:18.419545       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="75.056µs"
	I0211 02:12:30.858473       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="60.058µs"
	I0211 02:14:31.859087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="72.632µs"
	I0211 02:14:43.038648       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-149709"
	I0211 02:14:45.861040       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="50.836µs"
	I0211 02:16:18.859507       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="120.833µs"
	I0211 02:16:33.858334       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="61.59µs"
	I0211 02:17:52.858318       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="113.149µs"
	I0211 02:18:04.860740       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="79.566µs"
	I0211 02:19:48.673419       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-149709"
	I0211 02:19:53.858913       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="128.792µs"
	I0211 02:20:06.859829       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="79.815µs"
	
	
	==> kube-controller-manager [c93bcaac3d7500b104deb63dc6718790d663f7a286c9babe2ba24074581f9b35] <==
	I0211 02:09:00.473768       1 shared_informer.go:320] Caches are synced for daemon sets
	I0211 02:09:00.473818       1 shared_informer.go:320] Caches are synced for namespace
	I0211 02:09:00.473849       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0211 02:09:00.473916       1 shared_informer.go:320] Caches are synced for disruption
	I0211 02:09:00.475090       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0211 02:09:00.475131       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0211 02:09:00.475156       1 shared_informer.go:320] Caches are synced for job
	I0211 02:09:00.475189       1 shared_informer.go:320] Caches are synced for ephemeral
	I0211 02:09:00.478956       1 shared_informer.go:320] Caches are synced for resource quota
	I0211 02:09:00.480070       1 shared_informer.go:320] Caches are synced for node
	I0211 02:09:00.480142       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0211 02:09:00.480157       1 shared_informer.go:320] Caches are synced for HPA
	I0211 02:09:00.480190       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0211 02:09:00.480201       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0211 02:09:00.480208       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0211 02:09:00.480260       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-149709"
	I0211 02:09:00.481032       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0211 02:09:00.493301       1 shared_informer.go:320] Caches are synced for garbage collector
	I0211 02:09:00.782830       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="307.64995ms"
	I0211 02:09:00.782956       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="80.187µs"
	I0211 02:09:04.362924       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-149709"
	I0211 02:09:14.546198       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-149709"
	I0211 02:09:14.877972       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="120.207µs"
	I0211 02:09:14.897982       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="7.680941ms"
	I0211 02:09:14.898075       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="57.865µs"
	
	
	==> kube-proxy [b1489888dc24549498bf1cc78fb07dc7f06dc42d60e2b39f651c2a9142215b93] <==
	I0211 02:09:39.257115       1 server_linux.go:66] "Using iptables proxy"
	I0211 02:09:39.399771       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0211 02:09:39.399833       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0211 02:09:39.419929       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0211 02:09:39.419990       1 server_linux.go:170] "Using iptables Proxier"
	I0211 02:09:39.421894       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0211 02:09:39.422247       1 server.go:497] "Version info" version="v1.32.1"
	I0211 02:09:39.422286       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0211 02:09:39.423640       1 config.go:199] "Starting service config controller"
	I0211 02:09:39.423682       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0211 02:09:39.423693       1 config.go:105] "Starting endpoint slice config controller"
	I0211 02:09:39.423713       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0211 02:09:39.423771       1 config.go:329] "Starting node config controller"
	I0211 02:09:39.423782       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0211 02:09:39.523916       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0211 02:09:39.523950       1 shared_informer.go:320] Caches are synced for service config
	I0211 02:09:39.523939       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [db71bc29b9056266eecb06c5d46a675313e7d2ee4ce05efabfd92321a335ad7a] <==
	I0211 02:08:54.246925       1 server_linux.go:66] "Using iptables proxy"
	E0211 02:08:57.227933       1 server.go:687] "Failed to retrieve node info" err="nodes \"functional-149709\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope"
	I0211 02:08:58.286204       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0211 02:08:58.286284       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0211 02:08:58.309313       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0211 02:08:58.309374       1 server_linux.go:170] "Using iptables Proxier"
	I0211 02:08:58.311629       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0211 02:08:58.312077       1 server.go:497] "Version info" version="v1.32.1"
	I0211 02:08:58.312097       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0211 02:08:58.314912       1 config.go:199] "Starting service config controller"
	I0211 02:08:58.314935       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0211 02:08:58.314964       1 config.go:105] "Starting endpoint slice config controller"
	I0211 02:08:58.314968       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0211 02:08:58.314986       1 config.go:329] "Starting node config controller"
	I0211 02:08:58.314989       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0211 02:08:58.415379       1 shared_informer.go:320] Caches are synced for node config
	I0211 02:08:58.415420       1 shared_informer.go:320] Caches are synced for service config
	I0211 02:08:58.415437       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b2919d4a128ddec1fedb004202b98aa018123cfa8d4391368f901a81a07abd64] <==
	I0211 02:09:36.457926       1 serving.go:386] Generated self-signed cert in-memory
	W0211 02:09:37.917294       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0211 02:09:37.917333       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0211 02:09:37.917345       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0211 02:09:37.917355       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0211 02:09:37.932425       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0211 02:09:37.932456       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0211 02:09:37.934891       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0211 02:09:37.934968       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0211 02:09:37.935197       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0211 02:09:37.935325       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0211 02:09:38.036079       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c821bc7e1ee3c1fe1d6b3b8d08f247cad0405eeb1de4bbbf133b5c65e28a3cce] <==
	I0211 02:08:54.964479       1 serving.go:386] Generated self-signed cert in-memory
	W0211 02:08:57.175522       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0211 02:08:57.175650       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0211 02:08:57.175691       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0211 02:08:57.175755       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0211 02:08:57.426672       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0211 02:08:57.426789       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0211 02:08:57.429835       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0211 02:08:57.429905       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0211 02:08:57.430009       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0211 02:08:57.430128       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0211 02:08:57.530563       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0211 02:09:18.171239       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0211 02:09:18.171322       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0211 02:09:18.171468       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 11 02:19:35 functional-149709 kubelet[5790]: E0211 02:19:35.065517    5790 manager.go:1116] Failed to create existing container: /docker/07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142/crio-a89530c23713348363b0f58824dbcef83e38d2f7216ebeb1aea2f83eb2343b9b: Error finding container a89530c23713348363b0f58824dbcef83e38d2f7216ebeb1aea2f83eb2343b9b: Status 404 returned error can't find the container with id a89530c23713348363b0f58824dbcef83e38d2f7216ebeb1aea2f83eb2343b9b
	Feb 11 02:19:35 functional-149709 kubelet[5790]: E0211 02:19:35.065677    5790 manager.go:1116] Failed to create existing container: /docker/07de83934a33b9c110b3ee859b7785ba88f9fb48547674e9fbb35538a9446142/crio-48c711a00a3da06a462434b62162a6e1ac84c5536b414a33c1beb435db9ef0a5: Error finding container 48c711a00a3da06a462434b62162a6e1ac84c5536b414a33c1beb435db9ef0a5: Status 404 returned error can't find the container with id 48c711a00a3da06a462434b62162a6e1ac84c5536b414a33c1beb435db9ef0a5
	Feb 11 02:19:35 functional-149709 kubelet[5790]: E0211 02:19:35.065840    5790 manager.go:1116] Failed to create existing container: /crio-93ca34f2cbc3b8a17e30b3fd3810fc5a3cfa544a0f8b82d5bb843ba771e09add: Error finding container 93ca34f2cbc3b8a17e30b3fd3810fc5a3cfa544a0f8b82d5bb843ba771e09add: Status 404 returned error can't find the container with id 93ca34f2cbc3b8a17e30b3fd3810fc5a3cfa544a0f8b82d5bb843ba771e09add
	Feb 11 02:19:35 functional-149709 kubelet[5790]: E0211 02:19:35.202627    5790 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739240375202455435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:19:35 functional-149709 kubelet[5790]: E0211 02:19:35.202666    5790 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739240375202455435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:19:38 functional-149709 kubelet[5790]: E0211 02:19:38.103727    5790 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Feb 11 02:19:38 functional-149709 kubelet[5790]: E0211 02:19:38.103809    5790 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Feb 11 02:19:38 functional-149709 kubelet[5790]: E0211 02:19:38.103972    5790 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vx9ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-58ccfd96bb-7rqk9_default(5c47c429-047e-4f75-8996-13d3d2320b17): ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Feb 11 02:19:38 functional-149709 kubelet[5790]: E0211 02:19:38.105137    5790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-7rqk9" podUID="5c47c429-047e-4f75-8996-13d3d2320b17"
	Feb 11 02:19:44 functional-149709 kubelet[5790]: E0211 02:19:44.849429    5790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6ac8c511-1482-4312-b43d-188556df06b2"
	Feb 11 02:19:44 functional-149709 kubelet[5790]: E0211 02:19:44.850036    5790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="6720f745-8fe7-4fa3-b048-17068b1b53da"
	Feb 11 02:19:45 functional-149709 kubelet[5790]: E0211 02:19:45.204013    5790 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739240385203853471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:19:45 functional-149709 kubelet[5790]: E0211 02:19:45.204046    5790 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739240385203853471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:19:53 functional-149709 kubelet[5790]: E0211 02:19:53.849918    5790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-7rqk9" podUID="5c47c429-047e-4f75-8996-13d3d2320b17"
	Feb 11 02:19:55 functional-149709 kubelet[5790]: E0211 02:19:55.205402    5790 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739240395205242991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:19:55 functional-149709 kubelet[5790]: E0211 02:19:55.205441    5790 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739240395205242991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:19:57 functional-149709 kubelet[5790]: E0211 02:19:57.848836    5790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6ac8c511-1482-4312-b43d-188556df06b2"
	Feb 11 02:19:57 functional-149709 kubelet[5790]: E0211 02:19:57.849684    5790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="6720f745-8fe7-4fa3-b048-17068b1b53da"
	Feb 11 02:20:05 functional-149709 kubelet[5790]: E0211 02:20:05.206765    5790 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739240405206607898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:20:05 functional-149709 kubelet[5790]: E0211 02:20:05.206797    5790 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739240405206607898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:20:06 functional-149709 kubelet[5790]: E0211 02:20:06.851135    5790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-7rqk9" podUID="5c47c429-047e-4f75-8996-13d3d2320b17"
	Feb 11 02:20:08 functional-149709 kubelet[5790]: E0211 02:20:08.850076    5790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="6720f745-8fe7-4fa3-b048-17068b1b53da"
	Feb 11 02:20:10 functional-149709 kubelet[5790]: E0211 02:20:10.849724    5790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6ac8c511-1482-4312-b43d-188556df06b2"
	Feb 11 02:20:15 functional-149709 kubelet[5790]: E0211 02:20:15.208163    5790 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739240415207991205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:20:15 functional-149709 kubelet[5790]: E0211 02:20:15.208201    5790 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739240415207991205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236026,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [443f9178951085086e8a084dab2223084c1e9c24e79675132764e7675237edaa] <==
	2025/02/11 02:10:12 Using namespace: kubernetes-dashboard
	2025/02/11 02:10:12 Using in-cluster config to connect to apiserver
	2025/02/11 02:10:12 Using secret token for csrf signing
	2025/02/11 02:10:12 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/02/11 02:10:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/02/11 02:10:12 Successful initial request to the apiserver, version: v1.32.1
	2025/02/11 02:10:12 Generating JWE encryption key
	2025/02/11 02:10:12 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/02/11 02:10:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/02/11 02:10:12 Initializing JWE encryption key from synchronized object
	2025/02/11 02:10:12 Creating in-cluster Sidecar client
	2025/02/11 02:10:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/02/11 02:10:12 Serving insecurely on HTTP port: 9090
	2025/02/11 02:10:42 Successful request to sidecar
	2025/02/11 02:10:12 Starting overwatch
	
	
	==> storage-provisioner [af88c17c1d865884cce3ed5ef47a3343d08fd32d984fa4603d9b3f55072a7194] <==
	I0211 02:09:39.220321       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0211 02:09:39.230587       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0211 02:09:39.230635       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0211 02:09:56.627365       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0211 02:09:56.627504       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-149709_815722e1-41d8-4d0d-9192-3989829f94b1!
	I0211 02:09:56.627505       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"257d8941-0b1d-4bde-b679-b11757715a03", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-149709_815722e1-41d8-4d0d-9192-3989829f94b1 became leader
	I0211 02:09:56.728134       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-149709_815722e1-41d8-4d0d-9192-3989829f94b1!
	I0211 02:10:15.273687       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0211 02:10:15.275014       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"c22aa53e-00b0-4d6d-829c-5c40e426662e", APIVersion:"v1", ResourceVersion:"836", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0211 02:10:15.274634       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    bb47fe95-8afe-4c64-a347-0f7c7d4c022c 379 0 2025-02-11 02:08:18 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-02-11 02:08:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-c22aa53e-00b0-4d6d-829c-5c40e426662e &PersistentVolumeClaim{ObjectMeta:{myclaim  default  c22aa53e-00b0-4d6d-829c-5c40e426662e 836 0 2025-02-11 02:10:15 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-02-11 02:10:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-02-11 02:10:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0211 02:10:15.275349       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-c22aa53e-00b0-4d6d-829c-5c40e426662e" provisioned
	I0211 02:10:15.275421       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0211 02:10:15.275479       1 volume_store.go:212] Trying to save persistentvolume "pvc-c22aa53e-00b0-4d6d-829c-5c40e426662e"
	I0211 02:10:15.287239       1 volume_store.go:219] persistentvolume "pvc-c22aa53e-00b0-4d6d-829c-5c40e426662e" saved
	I0211 02:10:15.287571       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"c22aa53e-00b0-4d6d-829c-5c40e426662e", APIVersion:"v1", ResourceVersion:"836", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-c22aa53e-00b0-4d6d-829c-5c40e426662e
	
	
	==> storage-provisioner [ea9cf7d369c969e5367cec8df8404f9b3f9d504cd681ca57ddc36cd4fae36be2] <==
	I0211 02:08:56.629101       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0211 02:08:57.427149       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0211 02:08:57.427239       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0211 02:09:14.825367       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0211 02:09:14.825479       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"257d8941-0b1d-4bde-b679-b11757715a03", APIVersion:"v1", ResourceVersion:"567", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-149709_aa0ff826-07ef-47a0-a974-fc2cfc019916 became leader
	I0211 02:09:14.825574       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-149709_aa0ff826-07ef-47a0-a974-fc2cfc019916!
	I0211 02:09:14.926152       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-149709_aa0ff826-07ef-47a0-a974-fc2cfc019916!
	

                                                
                                                
-- /stdout --
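The kubelet log in the dump above shows the common cause behind the remaining failures: every pull from docker.io (mysql:5.7, nginx, nginx:alpine) is rejected with toomanyrequests, i.e. the anonymous Docker Hub rate limit, so the affected containers sit in ErrImagePull/ImagePullBackOff. A minimal workaround sketch, assuming Docker Hub credentials are available on the host running the test (DOCKERHUB_USER and DOCKERHUB_PASS are placeholder variables, not part of this report), is to pull the images once with an authenticated client and load them into the node so the cluster never hits the anonymous limit:

	# authenticate the local docker client, then pre-pull the images the stuck pods need
	echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USER" --password-stdin
	docker pull docker.io/mysql:5.7
	docker pull docker.io/nginx:alpine
	docker pull docker.io/nginx:latest
	# copy the local images into the functional-149709 node so kubelet finds them without pulling
	minikube -p functional-149709 image load docker.io/mysql:5.7
	minikube -p functional-149709 image load docker.io/nginx:alpine
	minikube -p functional-149709 image load docker.io/nginx:latest

A docker-registry pull secret attached to the default service account via imagePullSecrets would have a similar effect; both are sketches of possible mitigations, not something the test harness does.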
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-149709 -n functional-149709
helpers_test.go:261: (dbg) Run:  kubectl --context functional-149709 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-7rqk9 nginx-svc sp-pod
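Of these, busybox-mount appears only because the field selector above excludes every phase other than Running; its describe output below shows Status: Succeeded with the mount-munger container Completed (exit code 0). The genuinely stuck pods are mysql-58ccfd96bb-7rqk9 (Pending with ImagePullBackOff in the describe output below) plus nginx-svc and sp-pod, for which the kubelet log above reports the same docker.io rate-limit back-off. One quick way to separate the two cases, sketched here as an illustration rather than a harness step, would be:

	kubectl --context functional-149709 get pods --field-selector=status.phase=Succeeded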
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-149709 describe pod busybox-mount mysql-58ccfd96bb-7rqk9 nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-149709 describe pod busybox-mount mysql-58ccfd96bb-7rqk9 nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-149709/192.168.49.2
	Start Time:       Tue, 11 Feb 2025 02:10:06 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://f8f2ca9ddefaeaed7331f2a14be99cd808ccd7fd9629a1ce3d4f63f679f92746
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 11 Feb 2025 02:10:08 +0000
	      Finished:     Tue, 11 Feb 2025 02:10:08 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gnrsr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-gnrsr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-149709
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.175s (1.175s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-7rqk9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-149709/192.168.49.2
	Start Time:       Tue, 11 Feb 2025 02:10:17 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vx9ff (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vx9ff:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/mysql-58ccfd96bb-7rqk9 to functional-149709
	  Warning  Failed     4m13s                kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    74s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     41s (x4 over 8m1s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     41s (x5 over 8m1s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    13s (x12 over 8m1s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     13s (x12 over 8m1s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-149709/192.168.49.2
	Start Time:       Tue, 11 Feb 2025 02:10:15 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6tlxm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6tlxm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/nginx-svc to functional-149709
	  Warning  Failed     3m42s (x3 over 7m31s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m19s (x5 over 10m)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     108s (x2 over 9m3s)    kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     108s (x5 over 9m3s)    kubelet            Error: ErrImagePull
	  Warning  Failed     35s (x16 over 9m2s)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    0s (x19 over 9m2s)     kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-149709/192.168.49.2
	Start Time:       Tue, 11 Feb 2025 02:10:15 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8wfk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-x8wfk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/sp-pod to functional-149709
	  Warning  Failed     6m30s                kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:088eea90c3d0a540ee5686e7d7471acbd4063b6e97eaf49b5e651665eb7f4dc7 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    103s (x5 over 10m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     72s (x4 over 8m32s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     72s (x5 over 8m32s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    9s (x15 over 8m31s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     9s (x15 over 8m31s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.74s)
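Each pull failure above (mysql:5.7, nginx:alpine, nginx:latest) is the same Docker Hub "toomanyrequests" anonymous rate limit, which is why the four non-running pods never leave ImagePullBackOff before the test gives up. A minimal sketch of how a run could sidestep the limit, assuming Docker Hub credentials are available on the CI host (the variable names below are illustrative):

	# Authenticate the host's Docker daemon so pulls count against an account rather than the anonymous IP quota.
	echo "$DOCKERHUB_TOKEN" | docker login -u "$DOCKERHUB_USER" --password-stdin
	# Pre-pull the images the tests need and push them into the profile's container runtime,
	# so the kubelet never has to reach docker.io while the tests run.
	docker pull docker.io/mysql:5.7
	docker pull docker.io/nginx:alpine
	minikube -p functional-149709 image load docker.io/mysql:5.7
	minikube -p functional-149709 image load docker.io/nginx:alpine
	# Another option is starting the profile against a pull-through cache via `minikube start --registry-mirror=...`.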

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Non-zero exit: docker pull kicbase/echo-server:1.0: exit status 1 (633.669083ms)

                                                
                                                
-- stdout --
	1.0: Pulling from kicbase/echo-server

                                                
                                                
-- /stdout --
** stderr ** 
	toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:361: failed to setup test (pull image): exit status 1

                                                
                                                
-- stdout --
	1.0: Pulling from kicbase/echo-server

                                                
                                                
-- /stdout --
** stderr ** 
	toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/Setup (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 image load --daemon kicbase/echo-server:functional-149709 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 image ls
functional_test.go:463: expected "kicbase/echo-server:functional-149709" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 image load --daemon kicbase/echo-server:functional-149709 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 image ls
functional_test.go:463: expected "kicbase/echo-server:functional-149709" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:252: (dbg) Non-zero exit: docker pull kicbase/echo-server:latest: exit status 1 (432.276333ms)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:254: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 image save kicbase/echo-server:functional-149709 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:403: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:428: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0211 02:10:08.854942   55848 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:10:08.855122   55848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:10:08.855134   55848 out.go:358] Setting ErrFile to fd 2...
	I0211 02:10:08.855141   55848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:10:08.855331   55848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
	I0211 02:10:08.855952   55848 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:10:08.856090   55848 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:10:08.856500   55848 cli_runner.go:164] Run: docker container inspect functional-149709 --format={{.State.Status}}
	I0211 02:10:08.874051   55848 ssh_runner.go:195] Run: systemctl --version
	I0211 02:10:08.874112   55848 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-149709
	I0211 02:10:08.891278   55848 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/functional-149709/id_rsa Username:docker}
	I0211 02:10:08.980644   55848 cache_images.go:289] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W0211 02:10:08.980712   55848 cache_images.go:253] Failed to load cached images for "functional-149709": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I0211 02:10:08.980739   55848 cache_images.go:265] failed pushing to: functional-149709

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-149709
functional_test.go:436: (dbg) Non-zero exit: docker rmi kicbase/echo-server:functional-149709: exit status 1 (16.84611ms)

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-149709

                                                
                                                
** /stderr **
functional_test.go:438: failed to remove image from docker: exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-149709

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)
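All of the ImageCommands failures above cascade from the Setup failure: `docker pull kicbase/echo-server:1.0` was rate-limited, so the kicbase/echo-server:functional-149709 tag never existed on the host, `image save` wrote no tarball, `image load` from that file found nothing, and the final `docker rmi` reported "No such image". For reference, the sequence these subtests exercise looks roughly like this when the initial pull succeeds (illustrative; the tag step is inferred from the image name the tests use):

	docker pull kicbase/echo-server:1.0
	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-149709
	minikube -p functional-149709 image load --daemon kicbase/echo-server:functional-149709
	minikube -p functional-149709 image ls        # the tag should now be listed
	minikube -p functional-149709 image save kicbase/echo-server:functional-149709 /tmp/echo-server-save.tar
	minikube -p functional-149709 image load /tmp/echo-server-save.tar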

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-149709 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [6720f745-8fe7-4fa3-b048-17068b1b53da] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-149709 -n functional-149709
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-02-11 02:14:15.739313334 +0000 UTC m=+754.944059485
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-149709 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-149709 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-149709/192.168.49.2
Start Time:       Tue, 11 Feb 2025 02:10:15 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:  10.244.0.8
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6tlxm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-6tlxm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-149709
  Warning  Failed     2m59s                kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     87s (x2 over 2m59s)  kubelet            Error: ErrImagePull
  Warning  Failed     87s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    72s (x2 over 2m58s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     72s (x2 over 2m58s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    60s (x3 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-149709 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-149709 logs nginx-svc -n default: exit status 1 (62.604228ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-149709 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.69s)
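The Setup helper here polls the pod list until pods matching run=nginx-svc report Ready. Expressed directly with kubectl, the wait is roughly the following (illustrative, not the helper's actual implementation):

	kubectl --context functional-149709 apply -f testdata/testsvc.yaml
	kubectl --context functional-149709 wait --for=condition=Ready pod -l run=nginx-svc --timeout=240s
	# In this run the wait can never succeed: the nginx:alpine pull stays in ImagePullBackOff,
	# so the pod remains Pending and the 4m0s deadline expires.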

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (95.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0211 02:14:15.864078   19028 retry.go:31] will retry after 3.162947376s: Temporary Error: Get "http:": http: no Host in request URL
I0211 02:14:19.028040   19028 retry.go:31] will retry after 6.372131815s: Temporary Error: Get "http:": http: no Host in request URL
E0211 02:14:20.891454   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
I0211 02:14:25.400359   19028 retry.go:31] will retry after 6.027008133s: Temporary Error: Get "http:": http: no Host in request URL
I0211 02:14:31.428196   19028 retry.go:31] will retry after 7.14155756s: Temporary Error: Get "http:": http: no Host in request URL
I0211 02:14:38.570147   19028 retry.go:31] will retry after 16.638642969s: Temporary Error: Get "http:": http: no Host in request URL
I0211 02:14:55.209421   19028 retry.go:31] will retry after 28.611616507s: Temporary Error: Get "http:": http: no Host in request URL
I0211 02:15:23.821387   19028 retry.go:31] will retry after 27.660734003s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-149709 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
nginx-svc   LoadBalancer   10.101.231.165   10.101.231.165   80:31214/TCP   5m36s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (95.68s)
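AccessDirect expects the LoadBalancer service exposed through `minikube tunnel` to answer with the nginx welcome page. A manual version of that check, assuming a tunnel is running for this profile (illustrative):

	minikube -p functional-149709 tunnel &
	EXTERNAL_IP=$(kubectl --context functional-149709 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://${EXTERNAL_IP}/" | grep "Welcome to nginx!"
	# Even with an external IP assigned (10.101.231.165), the backing nginx-svc pod never became Ready
	# in this run, so the expected "Welcome to nginx!" body could not be served.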

                                                
                                    
TestNetworkPlugins/group/calico/Start (1638.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-065740 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-065740 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: exit status 80 (27m18.7074835s)

                                                
                                                
-- stdout --
	* [calico-065740] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20400
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "calico-065740" primary control-plane node in "calico-065740" cluster
	* Pulling base image v0.0.46 ...
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	* Preparing Kubernetes v1.32.1 on CRI-O 1.24.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0211 02:54:02.706312  334684 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:54:02.706699  334684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:54:02.706713  334684 out.go:358] Setting ErrFile to fd 2...
	I0211 02:54:02.706718  334684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:54:02.706943  334684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
	I0211 02:54:02.707542  334684 out.go:352] Setting JSON to false
	I0211 02:54:02.709019  334684 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5792,"bootTime":1739236651,"procs":491,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 02:54:02.709111  334684 start.go:139] virtualization: kvm guest
	I0211 02:54:02.711480  334684 out.go:177] * [calico-065740] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 02:54:02.713132  334684 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 02:54:02.713132  334684 notify.go:220] Checking for updates...
	I0211 02:54:02.714820  334684 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 02:54:02.716277  334684 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	I0211 02:54:02.717801  334684 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	I0211 02:54:02.719272  334684 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 02:54:02.720667  334684 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 02:54:02.722685  334684 config.go:182] Loaded profile config "bridge-065740": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:54:02.722811  334684 config.go:182] Loaded profile config "default-k8s-diff-port-289377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:54:02.722917  334684 config.go:182] Loaded profile config "enable-default-cni-065740": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:54:02.723054  334684 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:54:02.749968  334684 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0211 02:54:02.750068  334684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:54:02.809495  334684 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:80 SystemTime:2025-02-11 02:54:02.795066987 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:54:02.809596  334684 docker.go:318] overlay module found
	I0211 02:54:02.811867  334684 out.go:177] * Using the docker driver based on user configuration
	I0211 02:54:02.813294  334684 start.go:297] selected driver: docker
	I0211 02:54:02.813308  334684 start.go:901] validating driver "docker" against <nil>
	I0211 02:54:02.813318  334684 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 02:54:02.814205  334684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:54:02.868538  334684 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:80 SystemTime:2025-02-11 02:54:02.856677734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:54:02.868773  334684 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0211 02:54:02.869096  334684 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 02:54:02.871161  334684 out.go:177] * Using Docker driver with root privileges
	I0211 02:54:02.872517  334684 cni.go:84] Creating CNI manager for "calico"
	I0211 02:54:02.872541  334684 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0211 02:54:02.872627  334684 start.go:340] cluster config:
	{Name:calico-065740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:calico-065740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:54:02.874333  334684 out.go:177] * Starting "calico-065740" primary control-plane node in "calico-065740" cluster
	I0211 02:54:02.875989  334684 cache.go:121] Beginning downloading kic base image for docker with crio
	I0211 02:54:02.877261  334684 out.go:177] * Pulling base image v0.0.46 ...
	I0211 02:54:02.878514  334684 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 02:54:02.878573  334684 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0211 02:54:02.878588  334684 cache.go:56] Caching tarball of preloaded images
	I0211 02:54:02.878648  334684 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0211 02:54:02.878698  334684 preload.go:172] Found /home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0211 02:54:02.878712  334684 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0211 02:54:02.878828  334684 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/config.json ...
	I0211 02:54:02.878852  334684 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/config.json: {Name:mkf39b2f63c8af56c47c4a3fa98864b3b79ad66d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:54:02.904221  334684 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0211 02:54:02.904243  334684 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0211 02:54:02.904264  334684 cache.go:230] Successfully downloaded all kic artifacts
	I0211 02:54:02.904302  334684 start.go:360] acquireMachinesLock for calico-065740: {Name:mk19061fa77c4ebce098bdb1087a7ffd34584c75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 02:54:02.904407  334684 start.go:364] duration metric: took 84.426µs to acquireMachinesLock for "calico-065740"
	I0211 02:54:02.904434  334684 start.go:93] Provisioning new machine with config: &{Name:calico-065740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:calico-065740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 02:54:02.904528  334684 start.go:125] createHost starting for "" (driver="docker")
	I0211 02:54:02.906526  334684 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0211 02:54:02.906774  334684 start.go:159] libmachine.API.Create for "calico-065740" (driver="docker")
	I0211 02:54:02.906820  334684 client.go:168] LocalClient.Create starting
	I0211 02:54:02.906914  334684 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem
	I0211 02:54:02.906954  334684 main.go:141] libmachine: Decoding PEM data...
	I0211 02:54:02.906970  334684 main.go:141] libmachine: Parsing certificate...
	I0211 02:54:02.907033  334684 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20400-12240/.minikube/certs/cert.pem
	I0211 02:54:02.907063  334684 main.go:141] libmachine: Decoding PEM data...
	I0211 02:54:02.907078  334684 main.go:141] libmachine: Parsing certificate...
	I0211 02:54:02.907418  334684 cli_runner.go:164] Run: docker network inspect calico-065740 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0211 02:54:02.926193  334684 cli_runner.go:211] docker network inspect calico-065740 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0211 02:54:02.926262  334684 network_create.go:284] running [docker network inspect calico-065740] to gather additional debugging logs...
	I0211 02:54:02.926284  334684 cli_runner.go:164] Run: docker network inspect calico-065740
	W0211 02:54:02.946650  334684 cli_runner.go:211] docker network inspect calico-065740 returned with exit code 1
	I0211 02:54:02.946682  334684 network_create.go:287] error running [docker network inspect calico-065740]: docker network inspect calico-065740: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-065740 not found
	I0211 02:54:02.946709  334684 network_create.go:289] output of [docker network inspect calico-065740]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-065740 not found
	
	** /stderr **
	I0211 02:54:02.946817  334684 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0211 02:54:02.968214  334684 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-370a375e9ac7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:3f:cb:fd:76} reservation:<nil>}
	I0211 02:54:02.969102  334684 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-031d12f65b31 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:cb:84:c5:a8} reservation:<nil>}
	I0211 02:54:02.970020  334684 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-314624e1e4fb IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d3:cf:64:79} reservation:<nil>}
	I0211 02:54:02.970895  334684 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1e621cec17d4 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:ba:44:0c:d0} reservation:<nil>}
	I0211 02:54:02.971789  334684 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001edaab0}
	I0211 02:54:02.971813  334684 network_create.go:124] attempt to create docker network calico-065740 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0211 02:54:02.971857  334684 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-065740 calico-065740
	I0211 02:54:03.041228  334684 network_create.go:108] docker network calico-065740 192.168.85.0/24 created
	I0211 02:54:03.041260  334684 kic.go:121] calculated static IP "192.168.85.2" for the "calico-065740" container
	I0211 02:54:03.041321  334684 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0211 02:54:03.061681  334684 cli_runner.go:164] Run: docker volume create calico-065740 --label name.minikube.sigs.k8s.io=calico-065740 --label created_by.minikube.sigs.k8s.io=true
	I0211 02:54:03.081427  334684 oci.go:103] Successfully created a docker volume calico-065740
	I0211 02:54:03.081511  334684 cli_runner.go:164] Run: docker run --rm --name calico-065740-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-065740 --entrypoint /usr/bin/test -v calico-065740:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0211 02:54:05.871804  334684 cli_runner.go:217] Completed: docker run --rm --name calico-065740-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-065740 --entrypoint /usr/bin/test -v calico-065740:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib: (2.790258188s)
	I0211 02:54:05.871840  334684 oci.go:107] Successfully prepared a docker volume calico-065740
	I0211 02:54:05.871863  334684 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 02:54:05.871879  334684 kic.go:194] Starting extracting preloaded images to volume ...
	I0211 02:54:05.871929  334684 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-065740:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0211 02:54:09.982802  334684 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-065740:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.110823358s)
	I0211 02:54:09.982839  334684 kic.go:203] duration metric: took 4.110955046s to extract preloaded images to volume ...
	W0211 02:54:09.982984  334684 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0211 02:54:09.983118  334684 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0211 02:54:10.035762  334684 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-065740 --name calico-065740 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-065740 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-065740 --network calico-065740 --ip 192.168.85.2 --volume calico-065740:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0211 02:54:10.421429  334684 cli_runner.go:164] Run: docker container inspect calico-065740 --format={{.State.Running}}
	I0211 02:54:10.444046  334684 cli_runner.go:164] Run: docker container inspect calico-065740 --format={{.State.Status}}
	I0211 02:54:10.468333  334684 cli_runner.go:164] Run: docker exec calico-065740 stat /var/lib/dpkg/alternatives/iptables
	I0211 02:54:10.518403  334684 oci.go:144] the created container "calico-065740" has a running status.
	I0211 02:54:10.518433  334684 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20400-12240/.minikube/machines/calico-065740/id_rsa...
	I0211 02:54:10.629294  334684 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20400-12240/.minikube/machines/calico-065740/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0211 02:54:10.652710  334684 cli_runner.go:164] Run: docker container inspect calico-065740 --format={{.State.Status}}
	I0211 02:54:10.673652  334684 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0211 02:54:10.673673  334684 kic_runner.go:114] Args: [docker exec --privileged calico-065740 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0211 02:54:10.722437  334684 cli_runner.go:164] Run: docker container inspect calico-065740 --format={{.State.Status}}
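
The kic SSH setup amounts to generating a per-machine key pair on the host and installing the public half for the in-container docker user. A rough equivalent with plain docker commands (minikube performs this through its internal kic runner, so the exact mechanics differ):

	ssh-keygen -t rsa -N "" -f "$HOME/.minikube/machines/calico-065740/id_rsa"
	docker exec calico-065740 mkdir -p /home/docker/.ssh
	docker cp "$HOME/.minikube/machines/calico-065740/id_rsa.pub" calico-065740:/home/docker/.ssh/authorized_keys
	docker exec --privileged calico-065740 chown docker:docker /home/docker/.ssh/authorized_keys
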
	I0211 02:54:10.743868  334684 machine.go:93] provisionDockerMachine start ...
	I0211 02:54:10.743968  334684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-065740
	I0211 02:54:10.778525  334684 main.go:141] libmachine: Using SSH client type: native
	I0211 02:54:10.778927  334684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I0211 02:54:10.778952  334684 main.go:141] libmachine: About to run SSH command:
	hostname
	I0211 02:54:10.779737  334684 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49244->127.0.0.1:33124: read: connection reset by peer
	I0211 02:54:13.923659  334684 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-065740
	
	I0211 02:54:13.923687  334684 ubuntu.go:169] provisioning hostname "calico-065740"
	I0211 02:54:13.923756  334684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-065740
	I0211 02:54:13.942813  334684 main.go:141] libmachine: Using SSH client type: native
	I0211 02:54:13.943025  334684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I0211 02:54:13.943043  334684 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-065740 && echo "calico-065740" | sudo tee /etc/hostname
	I0211 02:54:14.096675  334684 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-065740
	
	I0211 02:54:14.096764  334684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-065740
	I0211 02:54:14.118925  334684 main.go:141] libmachine: Using SSH client type: native
	I0211 02:54:14.119149  334684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I0211 02:54:14.119168  334684 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-065740' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-065740/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-065740' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0211 02:54:14.252542  334684 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0211 02:54:14.252576  334684 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20400-12240/.minikube CaCertPath:/home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20400-12240/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20400-12240/.minikube}
	I0211 02:54:14.252612  334684 ubuntu.go:177] setting up certificates
	I0211 02:54:14.252624  334684 provision.go:84] configureAuth start
	I0211 02:54:14.252690  334684 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-065740
	I0211 02:54:14.272238  334684 provision.go:143] copyHostCerts
	I0211 02:54:14.272306  334684 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12240/.minikube/ca.pem, removing ...
	I0211 02:54:14.272326  334684 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12240/.minikube/ca.pem
	I0211 02:54:14.272416  334684 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20400-12240/.minikube/ca.pem (1078 bytes)
	I0211 02:54:14.272524  334684 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12240/.minikube/cert.pem, removing ...
	I0211 02:54:14.272539  334684 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12240/.minikube/cert.pem
	I0211 02:54:14.272576  334684 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20400-12240/.minikube/cert.pem (1123 bytes)
	I0211 02:54:14.272661  334684 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12240/.minikube/key.pem, removing ...
	I0211 02:54:14.272673  334684 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12240/.minikube/key.pem
	I0211 02:54:14.272710  334684 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20400-12240/.minikube/key.pem (1675 bytes)
	I0211 02:54:14.272795  334684 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20400-12240/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca-key.pem org=jenkins.calico-065740 san=[127.0.0.1 192.168.85.2 calico-065740 localhost minikube]
	I0211 02:54:14.367244  334684 provision.go:177] copyRemoteCerts
	I0211 02:54:14.367310  334684 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0211 02:54:14.367357  334684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-065740
	I0211 02:54:14.387635  334684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/calico-065740/id_rsa Username:docker}
	I0211 02:54:14.488797  334684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0211 02:54:14.511558  334684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0211 02:54:14.537780  334684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0211 02:54:14.564683  334684 provision.go:87] duration metric: took 312.043281ms to configureAuth
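
configureAuth signs a machine-specific server certificate against the shared CA and copies it to /etc/docker on the node. An illustrative openssl equivalent for the SAN list logged above (minikube generates these certificates in Go; the key size, validity period, and file names here are assumptions):

	openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.calico-065740" \
	  -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -days 365 \
	  -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:calico-065740,DNS:localhost,DNS:minikube')
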
	I0211 02:54:14.564717  334684 ubuntu.go:193] setting minikube options for container-runtime
	I0211 02:54:14.564882  334684 config.go:182] Loaded profile config "calico-065740": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:54:14.564968  334684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-065740
	I0211 02:54:14.588453  334684 main.go:141] libmachine: Using SSH client type: native
	I0211 02:54:14.588636  334684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I0211 02:54:14.588657  334684 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0211 02:54:14.814584  334684 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0211 02:54:14.814616  334684 machine.go:96] duration metric: took 4.070724385s to provisionDockerMachine
	I0211 02:54:14.814628  334684 client.go:171] duration metric: took 11.907797428s to LocalClient.Create
	I0211 02:54:14.814649  334684 start.go:167] duration metric: took 11.907875008s to libmachine.API.Create "calico-065740"
	I0211 02:54:14.814657  334684 start.go:293] postStartSetup for "calico-065740" (driver="docker")
	I0211 02:54:14.814670  334684 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0211 02:54:14.814739  334684 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0211 02:54:14.814786  334684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-065740
	I0211 02:54:14.837991  334684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/calico-065740/id_rsa Username:docker}
	I0211 02:54:14.938277  334684 ssh_runner.go:195] Run: cat /etc/os-release
	I0211 02:54:14.941979  334684 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0211 02:54:14.942013  334684 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0211 02:54:14.942021  334684 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0211 02:54:14.942027  334684 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0211 02:54:14.942037  334684 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12240/.minikube/addons for local assets ...
	I0211 02:54:14.942087  334684 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12240/.minikube/files for local assets ...
	I0211 02:54:14.942172  334684 filesync.go:149] local asset: /home/jenkins/minikube-integration/20400-12240/.minikube/files/etc/ssl/certs/190282.pem -> 190282.pem in /etc/ssl/certs
	I0211 02:54:14.942273  334684 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0211 02:54:14.951345  334684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/files/etc/ssl/certs/190282.pem --> /etc/ssl/certs/190282.pem (1708 bytes)
	I0211 02:54:14.978341  334684 start.go:296] duration metric: took 163.669011ms for postStartSetup
	I0211 02:54:14.978813  334684 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-065740
	I0211 02:54:14.999023  334684 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/config.json ...
	I0211 02:54:14.999270  334684 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0211 02:54:14.999310  334684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-065740
	I0211 02:54:15.020497  334684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/calico-065740/id_rsa Username:docker}
	I0211 02:54:15.108662  334684 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0211 02:54:15.113087  334684 start.go:128] duration metric: took 12.208543801s to createHost
	I0211 02:54:15.113113  334684 start.go:83] releasing machines lock for "calico-065740", held for 12.208693605s
	I0211 02:54:15.113182  334684 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-065740
	I0211 02:54:15.134068  334684 ssh_runner.go:195] Run: cat /version.json
	I0211 02:54:15.134116  334684 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0211 02:54:15.134125  334684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-065740
	I0211 02:54:15.134189  334684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-065740
	I0211 02:54:15.155237  334684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/calico-065740/id_rsa Username:docker}
	I0211 02:54:15.157778  334684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/calico-065740/id_rsa Username:docker}
	I0211 02:54:15.332712  334684 ssh_runner.go:195] Run: systemctl --version
	I0211 02:54:15.337515  334684 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0211 02:54:15.477675  334684 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0211 02:54:15.482578  334684 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0211 02:54:15.503090  334684 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0211 02:54:15.503173  334684 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0211 02:54:15.539230  334684 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
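
Because this profile will install Calico, any pre-existing loopback, bridge, or podman CNI definitions are renamed out of the way rather than deleted, so CRI-O cannot pick them up first. Conceptually:

	# Sideline default CNI configs; the .mk_disabled suffix lets them be restored later.
	for f in /etc/cni/net.d/*loopback.conf* /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	  case "$f" in
	    *.mk_disabled) ;;                                    # already disabled
	    *) [ -e "$f" ] && sudo mv "$f" "$f.mk_disabled" ;;
	  esac
	done
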
	I0211 02:54:15.539253  334684 start.go:495] detecting cgroup driver to use...
	I0211 02:54:15.539284  334684 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0211 02:54:15.539327  334684 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0211 02:54:15.561958  334684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0211 02:54:15.575141  334684 docker.go:217] disabling cri-docker service (if available) ...
	I0211 02:54:15.575266  334684 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0211 02:54:15.592758  334684 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0211 02:54:15.610583  334684 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0211 02:54:15.694083  334684 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0211 02:54:15.801828  334684 docker.go:233] disabling docker service ...
	I0211 02:54:15.801890  334684 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0211 02:54:15.822590  334684 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0211 02:54:15.836596  334684 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0211 02:54:15.929617  334684 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0211 02:54:16.012342  334684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0211 02:54:16.023664  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0211 02:54:16.040059  334684 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0211 02:54:16.040193  334684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:54:16.049462  334684 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0211 02:54:16.049528  334684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:54:16.059602  334684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:54:16.070482  334684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:54:16.081251  334684 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0211 02:54:16.090778  334684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:54:16.102476  334684 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:54:16.119003  334684 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:54:16.130219  334684 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0211 02:54:16.138474  334684 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0211 02:54:16.150198  334684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 02:54:16.238748  334684 ssh_runner.go:195] Run: sudo systemctl restart crio
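
Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) leave the CRI-O drop-in with roughly the following shape; this is an illustrative fragment written to a separate example file, not a dump of the real 02-crio.conf:

	cat <<-'EOF' | sudo tee /etc/crio/crio.conf.d/02-crio.conf.example >/dev/null
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
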
	I0211 02:54:16.353618  334684 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0211 02:54:16.353695  334684 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0211 02:54:16.358003  334684 start.go:563] Will wait 60s for crictl version
	I0211 02:54:16.358069  334684 ssh_runner.go:195] Run: which crictl
	I0211 02:54:16.361667  334684 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0211 02:54:16.409060  334684 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0211 02:54:16.409162  334684 ssh_runner.go:195] Run: crio --version
	I0211 02:54:16.461116  334684 ssh_runner.go:195] Run: crio --version
	I0211 02:54:16.509851  334684 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.24.6 ...
	I0211 02:54:16.511269  334684 cli_runner.go:164] Run: docker network inspect calico-065740 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0211 02:54:16.532503  334684 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0211 02:54:16.536715  334684 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 02:54:16.549300  334684 kubeadm.go:883] updating cluster {Name:calico-065740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:calico-065740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0211 02:54:16.549449  334684 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 02:54:16.549511  334684 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 02:54:16.619585  334684 crio.go:514] all images are preloaded for cri-o runtime.
	I0211 02:54:16.619610  334684 crio.go:433] Images already preloaded, skipping extraction
	I0211 02:54:16.619665  334684 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 02:54:16.655999  334684 crio.go:514] all images are preloaded for cri-o runtime.
	I0211 02:54:16.656023  334684 cache_images.go:84] Images are preloaded, skipping loading
	I0211 02:54:16.656032  334684 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.1 crio true true} ...
	I0211 02:54:16.656215  334684 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-065740 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:calico-065740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0211 02:54:16.656331  334684 ssh_runner.go:195] Run: crio config
	I0211 02:54:16.706914  334684 cni.go:84] Creating CNI manager for "calico"
	I0211 02:54:16.706951  334684 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0211 02:54:16.706981  334684 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-065740 NodeName:calico-065740 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0211 02:54:16.707161  334684 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-065740"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
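
Before kubeadm init runs, the generated kubeadm.yaml above can be sanity-checked offline; recent kubeadm releases (including v1.32) ship a validator for exactly this:

	sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
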
	I0211 02:54:16.707240  334684 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0211 02:54:16.717255  334684 binaries.go:44] Found k8s binaries, skipping transfer
	I0211 02:54:16.717309  334684 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0211 02:54:16.727005  334684 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0211 02:54:16.747560  334684 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0211 02:54:16.767005  334684 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0211 02:54:16.785952  334684 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0211 02:54:16.789389  334684 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 02:54:16.800593  334684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 02:54:16.886234  334684 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 02:54:16.900961  334684 certs.go:68] Setting up /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740 for IP: 192.168.85.2
	I0211 02:54:16.900981  334684 certs.go:194] generating shared ca certs ...
	I0211 02:54:16.901001  334684 certs.go:226] acquiring lock for ca certs: {Name:mk01247a5e2f34c4793d43faa12fab98d68353d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:54:16.901174  334684 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20400-12240/.minikube/ca.key
	I0211 02:54:16.901231  334684 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20400-12240/.minikube/proxy-client-ca.key
	I0211 02:54:16.901245  334684 certs.go:256] generating profile certs ...
	I0211 02:54:16.901314  334684 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/client.key
	I0211 02:54:16.901331  334684 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/client.crt with IP's: []
	I0211 02:54:17.069581  334684 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/client.crt ...
	I0211 02:54:17.069608  334684 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/client.crt: {Name:mkcab70a8fe2264bad9a4daabc533c5de1dfd66d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:54:17.069814  334684 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/client.key ...
	I0211 02:54:17.069830  334684 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/client.key: {Name:mk070b200c1da7e5ac8225c6e8d13bf91fdc4e2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:54:17.069937  334684 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/apiserver.key.8037ace8
	I0211 02:54:17.069954  334684 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/apiserver.crt.8037ace8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0211 02:54:17.207770  334684 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/apiserver.crt.8037ace8 ...
	I0211 02:54:17.207799  334684 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/apiserver.crt.8037ace8: {Name:mk1aff4f86cee6ded6e3143615a66ed430ef1e04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:54:17.207973  334684 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/apiserver.key.8037ace8 ...
	I0211 02:54:17.207997  334684 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/apiserver.key.8037ace8: {Name:mkb1f9232d78452601e1c608ad3a949dd6efc31b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:54:17.208133  334684 certs.go:381] copying /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/apiserver.crt.8037ace8 -> /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/apiserver.crt
	I0211 02:54:17.208234  334684 certs.go:385] copying /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/apiserver.key.8037ace8 -> /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/apiserver.key
	I0211 02:54:17.208289  334684 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/proxy-client.key
	I0211 02:54:17.208304  334684 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/proxy-client.crt with IP's: []
	I0211 02:54:17.282811  334684 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/proxy-client.crt ...
	I0211 02:54:17.282841  334684 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/proxy-client.crt: {Name:mk6822e49040a43d063eaec753ac2e26a6556182 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:54:17.283017  334684 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/proxy-client.key ...
	I0211 02:54:17.283033  334684 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/proxy-client.key: {Name:mk03264ebfc8f57dc4339b9b1b86d8166fc62e35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:54:17.283241  334684 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/19028.pem (1338 bytes)
	W0211 02:54:17.283293  334684 certs.go:480] ignoring /home/jenkins/minikube-integration/20400-12240/.minikube/certs/19028_empty.pem, impossibly tiny 0 bytes
	I0211 02:54:17.283318  334684 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca-key.pem (1679 bytes)
	I0211 02:54:17.283355  334684 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/ca.pem (1078 bytes)
	I0211 02:54:17.283397  334684 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/cert.pem (1123 bytes)
	I0211 02:54:17.283435  334684 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12240/.minikube/certs/key.pem (1675 bytes)
	I0211 02:54:17.283489  334684 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12240/.minikube/files/etc/ssl/certs/190282.pem (1708 bytes)
	I0211 02:54:17.284227  334684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0211 02:54:17.307670  334684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0211 02:54:17.332732  334684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0211 02:54:17.361270  334684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0211 02:54:17.383508  334684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0211 02:54:17.406022  334684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0211 02:54:17.428828  334684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0211 02:54:17.452891  334684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/calico-065740/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0211 02:54:17.476473  334684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0211 02:54:17.498156  334684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/certs/19028.pem --> /usr/share/ca-certificates/19028.pem (1338 bytes)
	I0211 02:54:17.519889  334684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12240/.minikube/files/etc/ssl/certs/190282.pem --> /usr/share/ca-certificates/190282.pem (1708 bytes)
	I0211 02:54:17.542158  334684 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0211 02:54:17.559175  334684 ssh_runner.go:195] Run: openssl version
	I0211 02:54:17.564397  334684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0211 02:54:17.573040  334684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0211 02:54:17.576084  334684 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 11 02:02 /usr/share/ca-certificates/minikubeCA.pem
	I0211 02:54:17.576155  334684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0211 02:54:17.583027  334684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0211 02:54:17.592252  334684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19028.pem && ln -fs /usr/share/ca-certificates/19028.pem /etc/ssl/certs/19028.pem"
	I0211 02:54:17.601127  334684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19028.pem
	I0211 02:54:17.604928  334684 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 11 02:07 /usr/share/ca-certificates/19028.pem
	I0211 02:54:17.604984  334684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19028.pem
	I0211 02:54:17.611685  334684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19028.pem /etc/ssl/certs/51391683.0"
	I0211 02:54:17.621197  334684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/190282.pem && ln -fs /usr/share/ca-certificates/190282.pem /etc/ssl/certs/190282.pem"
	I0211 02:54:17.630063  334684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/190282.pem
	I0211 02:54:17.633396  334684 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 11 02:07 /usr/share/ca-certificates/190282.pem
	I0211 02:54:17.633460  334684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/190282.pem
	I0211 02:54:17.640566  334684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/190282.pem /etc/ssl/certs/3ec20f2e.0"
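
The 0-suffixed names used here (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links, which is how the system trust store looks up CA files; the same link can be recreated by hand:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
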
	I0211 02:54:17.649831  334684 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0211 02:54:17.652971  334684 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0211 02:54:17.653046  334684 kubeadm.go:392] StartCluster: {Name:calico-065740 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:calico-065740 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:54:17.653114  334684 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0211 02:54:17.653166  334684 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0211 02:54:17.688768  334684 cri.go:89] found id: ""
	I0211 02:54:17.688833  334684 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0211 02:54:17.697441  334684 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0211 02:54:17.705704  334684 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0211 02:54:17.705764  334684 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0211 02:54:17.713678  334684 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0211 02:54:17.713697  334684 kubeadm.go:157] found existing configuration files:
	
	I0211 02:54:17.713745  334684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0211 02:54:17.721876  334684 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0211 02:54:17.721946  334684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0211 02:54:17.730266  334684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0211 02:54:17.738813  334684 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0211 02:54:17.738878  334684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0211 02:54:17.747026  334684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0211 02:54:17.755509  334684 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0211 02:54:17.755570  334684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0211 02:54:17.765382  334684 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0211 02:54:17.774398  334684 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0211 02:54:17.774470  334684 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0211 02:54:17.783489  334684 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0211 02:54:17.825265  334684 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0211 02:54:17.825341  334684 kubeadm.go:310] [preflight] Running pre-flight checks
	I0211 02:54:17.847919  334684 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0211 02:54:17.848008  334684 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-gcp
	I0211 02:54:17.848052  334684 kubeadm.go:310] OS: Linux
	I0211 02:54:17.848129  334684 kubeadm.go:310] CGROUPS_CPU: enabled
	I0211 02:54:17.848191  334684 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0211 02:54:17.848250  334684 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0211 02:54:17.848311  334684 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0211 02:54:17.848370  334684 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0211 02:54:17.848435  334684 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0211 02:54:17.848493  334684 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0211 02:54:17.848559  334684 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0211 02:54:17.848612  334684 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0211 02:54:17.921347  334684 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0211 02:54:17.921521  334684 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0211 02:54:17.921642  334684 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0211 02:54:17.931101  334684 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0211 02:54:17.934533  334684 out.go:235]   - Generating certificates and keys ...
	I0211 02:54:17.934648  334684 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0211 02:54:17.934728  334684 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0211 02:54:18.309442  334684 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0211 02:54:18.478251  334684 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0211 02:54:18.980503  334684 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0211 02:54:19.081744  334684 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0211 02:54:19.408507  334684 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0211 02:54:19.408644  334684 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-065740 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0211 02:54:19.557261  334684 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0211 02:54:19.557420  334684 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-065740 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0211 02:54:19.757238  334684 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0211 02:54:19.828525  334684 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0211 02:54:19.984903  334684 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0211 02:54:19.985017  334684 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0211 02:54:20.536208  334684 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0211 02:54:20.877273  334684 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0211 02:54:21.150624  334684 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0211 02:54:21.387063  334684 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0211 02:54:21.530238  334684 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0211 02:54:21.530919  334684 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0211 02:54:21.533950  334684 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0211 02:54:21.535743  334684 out.go:235]   - Booting up control plane ...
	I0211 02:54:21.535902  334684 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0211 02:54:21.536030  334684 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0211 02:54:21.537026  334684 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0211 02:54:21.548616  334684 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0211 02:54:21.555086  334684 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0211 02:54:21.555178  334684 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0211 02:54:21.640712  334684 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0211 02:54:21.640896  334684 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0211 02:54:22.142500  334684 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.881878ms
	I0211 02:54:22.142629  334684 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0211 02:54:27.145187  334684 kubeadm.go:310] [api-check] The API server is healthy after 5.002631955s
	I0211 02:54:27.157090  334684 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0211 02:54:27.167676  334684 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0211 02:54:27.186325  334684 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0211 02:54:27.186561  334684 kubeadm.go:310] [mark-control-plane] Marking the node calico-065740 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0211 02:54:27.193949  334684 kubeadm.go:310] [bootstrap-token] Using token: 497x9g.90tywagljobla1zs
	I0211 02:54:27.196173  334684 out.go:235]   - Configuring RBAC rules ...
	I0211 02:54:27.196304  334684 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0211 02:54:27.198936  334684 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0211 02:54:27.206990  334684 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0211 02:54:27.210188  334684 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0211 02:54:27.213839  334684 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0211 02:54:27.216562  334684 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0211 02:54:27.553284  334684 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0211 02:54:27.970044  334684 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0211 02:54:28.553849  334684 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0211 02:54:28.554112  334684 kubeadm.go:310] 
	I0211 02:54:28.554216  334684 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0211 02:54:28.554231  334684 kubeadm.go:310] 
	I0211 02:54:28.554344  334684 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0211 02:54:28.554354  334684 kubeadm.go:310] 
	I0211 02:54:28.554396  334684 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0211 02:54:28.554486  334684 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0211 02:54:28.554565  334684 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0211 02:54:28.554575  334684 kubeadm.go:310] 
	I0211 02:54:28.554645  334684 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0211 02:54:28.554655  334684 kubeadm.go:310] 
	I0211 02:54:28.554763  334684 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0211 02:54:28.554773  334684 kubeadm.go:310] 
	I0211 02:54:28.554832  334684 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0211 02:54:28.554930  334684 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0211 02:54:28.555021  334684 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0211 02:54:28.555041  334684 kubeadm.go:310] 
	I0211 02:54:28.555140  334684 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0211 02:54:28.555227  334684 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0211 02:54:28.555234  334684 kubeadm.go:310] 
	I0211 02:54:28.555319  334684 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 497x9g.90tywagljobla1zs \
	I0211 02:54:28.555430  334684 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2585e5533b2c5436f5c33785db0dba3d71e3104cee8f0548f45ec36ce8746 \
	I0211 02:54:28.555465  334684 kubeadm.go:310] 	--control-plane 
	I0211 02:54:28.555471  334684 kubeadm.go:310] 
	I0211 02:54:28.555579  334684 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0211 02:54:28.555585  334684 kubeadm.go:310] 
	I0211 02:54:28.555671  334684 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 497x9g.90tywagljobla1zs \
	I0211 02:54:28.555790  334684 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e8f2585e5533b2c5436f5c33785db0dba3d71e3104cee8f0548f45ec36ce8746 
	I0211 02:54:28.559514  334684 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0211 02:54:28.559813  334684 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-gcp\n", err: exit status 1
	I0211 02:54:28.559974  334684 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0211 02:54:28.560009  334684 cni.go:84] Creating CNI manager for "calico"
	I0211 02:54:28.561985  334684 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0211 02:54:28.563520  334684 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0211 02:54:28.563548  334684 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (324369 bytes)
	I0211 02:54:28.582468  334684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0211 02:54:30.055096  334684 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.472589388s)
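
Once the Calico manifest is applied it takes a short while for the calico-node DaemonSet and calico-kube-controllers to come up; a quick check against the same kubeconfig used in this run (assuming the stock Calico manifest's k8s-app label):

	sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pods -l k8s-app=calico-node
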
	I0211 02:54:30.055156  334684 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0211 02:54:30.055268  334684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:54:30.055299  334684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-065740 minikube.k8s.io/updated_at=2025_02_11T02_54_30_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=8e91f70b9b442caa4bec80b031add390ac34d321 minikube.k8s.io/name=calico-065740 minikube.k8s.io/primary=true
	I0211 02:54:30.173452  334684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:54:30.173580  334684 ops.go:34] apiserver oom_adj: -16
	I0211 02:54:30.673719  334684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:54:31.174342  334684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:54:31.674335  334684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:54:32.174141  334684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:54:32.673609  334684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:54:33.174250  334684 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:54:33.382606  334684 kubeadm.go:1113] duration metric: took 3.327399601s to wait for elevateKubeSystemPrivileges
	I0211 02:54:33.382637  334684 kubeadm.go:394] duration metric: took 15.729595119s to StartCluster
	I0211 02:54:33.382654  334684 settings.go:142] acquiring lock: {Name:mkab2b143b733b0f17bed345e030250b8d37f745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:54:33.382718  334684 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20400-12240/kubeconfig
	I0211 02:54:33.383876  334684 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12240/kubeconfig: {Name:mk7d609b79772e5fa84ecd6d15f2188446c79bf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:54:33.384100  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0211 02:54:33.384140  334684 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 02:54:33.384220  334684 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0211 02:54:33.384320  334684 addons.go:69] Setting storage-provisioner=true in profile "calico-065740"
	I0211 02:54:33.384327  334684 config.go:182] Loaded profile config "calico-065740": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:54:33.384341  334684 addons.go:238] Setting addon storage-provisioner=true in "calico-065740"
	I0211 02:54:33.384372  334684 host.go:66] Checking if "calico-065740" exists ...
	I0211 02:54:33.384388  334684 addons.go:69] Setting default-storageclass=true in profile "calico-065740"
	I0211 02:54:33.384401  334684 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-065740"
	I0211 02:54:33.384769  334684 cli_runner.go:164] Run: docker container inspect calico-065740 --format={{.State.Status}}
	I0211 02:54:33.384892  334684 cli_runner.go:164] Run: docker container inspect calico-065740 --format={{.State.Status}}
	I0211 02:54:33.386384  334684 out.go:177] * Verifying Kubernetes components...
	I0211 02:54:33.387844  334684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 02:54:33.407676  334684 addons.go:238] Setting addon default-storageclass=true in "calico-065740"
	I0211 02:54:33.407711  334684 host.go:66] Checking if "calico-065740" exists ...
	I0211 02:54:33.408022  334684 cli_runner.go:164] Run: docker container inspect calico-065740 --format={{.State.Status}}
	I0211 02:54:33.410841  334684 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 02:54:33.412466  334684 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 02:54:33.412492  334684 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0211 02:54:33.412540  334684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-065740
	I0211 02:54:33.431123  334684 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0211 02:54:33.431151  334684 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0211 02:54:33.431216  334684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-065740
	I0211 02:54:33.441334  334684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/calico-065740/id_rsa Username:docker}
	I0211 02:54:33.457876  334684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/calico-065740/id_rsa Username:docker}
	I0211 02:54:33.616905  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0211 02:54:33.644180  334684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 02:54:33.648989  334684 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0211 02:54:33.726420  334684 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 02:54:34.143580  334684 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0211 02:54:34.390174  334684 node_ready.go:35] waiting up to 15m0s for node "calico-065740" to be "Ready" ...
	I0211 02:54:34.396588  334684 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0211 02:54:34.397843  334684 addons.go:514] duration metric: took 1.013621464s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0211 02:54:34.647611  334684 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-065740" context rescaled to 1 replicas
	I0211 02:54:36.393746  334684 node_ready.go:53] node "calico-065740" has status "Ready":"False"
	I0211 02:54:38.893818  334684 node_ready.go:53] node "calico-065740" has status "Ready":"False"
	I0211 02:54:39.892897  334684 node_ready.go:49] node "calico-065740" has status "Ready":"True"
	I0211 02:54:39.893021  334684 node_ready.go:38] duration metric: took 5.502814569s for node "calico-065740" to be "Ready" ...
	I0211 02:54:39.893043  334684 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
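(Context for the long poll that follows: the "extra waiting" is a readiness check against each system pod's PodReady condition, retried until it turns True or the per-pod timeout expires. A rough client-go illustration of such a poll, not minikube's pod_ready.go; the kubeconfig path, namespace, pod name, and 4-minute timeout are assumptions chosen to match the shape of the log below.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// All concrete values here are assumptions for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "calico-node-8wh8m", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}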
	I0211 02:54:39.896377  334684 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace to be "Ready" ...
	I0211 02:54:41.902267  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:54:44.401957  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:54:46.901041  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:54:48.901836  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:54:50.901962  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:54:52.902258  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:54:55.401706  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:54:57.901868  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:00.408883  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:02.902283  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:04.903788  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:07.401765  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:09.901763  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:11.901847  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:14.401834  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:16.401977  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:18.402318  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:20.902185  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:23.402152  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:25.404053  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:27.902417  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:30.402352  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:32.402451  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:34.404245  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:36.903487  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:39.401712  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:41.401791  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:43.901897  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:45.902549  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:48.401090  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:50.401864  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:52.402360  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:54.901754  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:57.401684  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:55:59.401734  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:01.901865  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:04.401575  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:06.901527  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:08.901820  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:11.401070  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:13.401754  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:15.901564  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:17.901596  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:19.901890  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:22.401877  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:24.902590  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:27.402051  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:29.402486  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:31.902374  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:34.401885  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:36.902036  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:38.902490  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:41.401632  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:43.401804  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:45.902141  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:48.401768  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:50.402132  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:52.403859  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:54.901853  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:56.901962  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:56:59.401871  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:01.402048  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:03.901863  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:05.902050  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:08.401042  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:10.401819  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:12.402089  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:14.902016  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:17.402376  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:19.901897  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:22.401855  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:24.901552  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:27.401135  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:29.901805  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:31.902224  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:34.402016  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:36.901871  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:39.401767  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:41.401852  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:43.901283  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:45.901625  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:48.401873  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:50.901717  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:52.902040  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:54.902210  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:57.401693  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:57:59.402746  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:01.901548  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:03.901813  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:05.901997  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:08.401634  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:10.401687  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:12.901514  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:15.401977  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:17.901474  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:19.902029  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:22.401646  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:24.903375  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:27.403065  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:29.901621  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:31.901834  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:34.401714  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:36.401803  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:38.901689  334684 pod_ready.go:103] pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:39.902370  334684 pod_ready.go:82] duration metric: took 4m0.005918677s for pod "calico-kube-controllers-77969b7d87-s7lml" in "kube-system" namespace to be "Ready" ...
	E0211 02:58:39.902404  334684 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0211 02:58:39.902415  334684 pod_ready.go:79] waiting up to 15m0s for pod "calico-node-8wh8m" in "kube-system" namespace to be "Ready" ...
	I0211 02:58:41.907957  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:43.908092  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:46.407872  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:48.408161  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:50.907684  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:52.908475  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:54.908814  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:57.407200  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:58:59.407411  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:01.408155  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:03.408233  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:05.409571  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:07.907798  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:10.408310  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:12.908326  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:15.407395  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:17.407856  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:19.908474  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:21.909523  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:24.407681  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:26.907488  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:28.908316  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:31.407322  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:33.407375  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:35.407511  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:37.407560  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:39.407926  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:41.908654  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:44.407578  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:46.908087  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:49.407848  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:51.408065  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:53.908022  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:56.407659  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:58.908125  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:01.407804  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:03.407924  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:05.408243  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:07.908066  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:10.408416  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:12.908365  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:14.921569  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:17.407680  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:19.408451  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:21.907721  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:23.908089  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:25.909687  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:28.407156  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:30.407716  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:32.908092  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:35.409708  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:37.908176  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:40.407266  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:42.408036  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:44.908294  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:47.407581  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:49.407802  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:51.407958  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:53.408487  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:55.907583  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:00:57.908535  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:00.409725  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:02.907471  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:04.908086  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:07.408143  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:09.408257  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:11.908205  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:14.407786  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:16.907517  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:19.407680  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:21.908210  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:24.407486  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:26.908023  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:28.908061  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:31.407780  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:33.908071  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:35.908305  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:38.407331  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:40.408557  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:42.907377  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:44.908000  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:46.909486  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:49.408263  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:51.907811  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:53.907875  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:55.908357  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:01:58.408255  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:00.908173  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:03.408591  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:05.907502  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:07.908773  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:10.407496  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:12.408533  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:14.908532  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:17.408005  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:19.408860  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:21.907570  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:23.907924  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:25.907962  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:28.407168  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:30.407775  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:32.408386  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:34.908287  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:37.408165  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:39.408205  334684 pod_ready.go:103] pod "calico-node-8wh8m" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:39.907603  334684 pod_ready.go:82] duration metric: took 4m0.005174583s for pod "calico-node-8wh8m" in "kube-system" namespace to be "Ready" ...
	E0211 03:02:39.907627  334684 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0211 03:02:39.907635  334684 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace to be "Ready" ...
	I0211 03:02:41.913448  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:44.412793  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:46.912540  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:48.913496  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:51.412336  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:53.412397  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:55.413075  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:57.912738  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:02:59.912878  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:02.412719  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:04.912581  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:06.913012  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:08.913042  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:11.413127  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:13.912373  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:15.913407  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:18.413258  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:20.912782  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:23.412620  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:25.912341  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:27.912916  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:29.913540  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:32.412867  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:34.912975  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:37.413254  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:39.413303  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:41.912790  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:44.412701  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:46.412809  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:48.913176  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:51.413375  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:53.913104  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:55.913192  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:03:58.413062  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:00.913627  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:03.412486  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:05.413427  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:07.912997  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:10.413145  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:12.413345  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:14.912241  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:16.912412  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:18.912694  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:20.912988  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:23.413014  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:25.413128  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:27.913072  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:30.412772  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:32.413108  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:34.413183  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:36.912952  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:39.413071  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:41.413341  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:43.912707  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:46.412829  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:48.912952  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:50.913117  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:53.413006  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:55.413210  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:04:57.913427  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:00.412576  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:02.412637  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:04.412879  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:06.414959  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:08.913311  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:11.413040  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:13.912789  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:16.412862  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:18.913101  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:20.913349  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:23.413066  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:25.912394  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:28.412585  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:30.412805  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:32.913639  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:35.412948  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:37.413400  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:39.912718  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:42.412688  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:44.912664  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:46.913102  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:49.413260  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:51.913628  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:54.412890  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:56.912809  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:05:59.413240  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:01.912505  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:03.912569  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:05.913362  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:07.913447  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:10.412645  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:12.912052  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:14.912971  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:17.412763  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:19.912740  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:21.913209  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:24.412681  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:26.912717  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:29.413004  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:31.413242  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:33.912782  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:36.413101  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:38.912320  334684 pod_ready.go:103] pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace has status "Ready":"False"
	I0211 03:06:39.912784  334684 pod_ready.go:82] duration metric: took 4m0.005136804s for pod "coredns-668d6bf9bc-llj6k" in "kube-system" namespace to be "Ready" ...
	E0211 03:06:39.912807  334684 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0211 03:06:39.912814  334684 pod_ready.go:79] waiting up to 15m0s for pod "etcd-calico-065740" in "kube-system" namespace to be "Ready" ...
	I0211 03:06:39.917871  334684 pod_ready.go:93] pod "etcd-calico-065740" in "kube-system" namespace has status "Ready":"True"
	I0211 03:06:39.917893  334684 pod_ready.go:82] duration metric: took 5.073472ms for pod "etcd-calico-065740" in "kube-system" namespace to be "Ready" ...
	I0211 03:06:39.917903  334684 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-calico-065740" in "kube-system" namespace to be "Ready" ...
	I0211 03:06:39.922005  334684 pod_ready.go:93] pod "kube-apiserver-calico-065740" in "kube-system" namespace has status "Ready":"True"
	I0211 03:06:39.922028  334684 pod_ready.go:82] duration metric: took 4.116715ms for pod "kube-apiserver-calico-065740" in "kube-system" namespace to be "Ready" ...
	I0211 03:06:39.922038  334684 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-calico-065740" in "kube-system" namespace to be "Ready" ...
	I0211 03:06:39.925594  334684 pod_ready.go:93] pod "kube-controller-manager-calico-065740" in "kube-system" namespace has status "Ready":"True"
	I0211 03:06:39.925617  334684 pod_ready.go:82] duration metric: took 3.57176ms for pod "kube-controller-manager-calico-065740" in "kube-system" namespace to be "Ready" ...
	I0211 03:06:39.925631  334684 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-cw9g9" in "kube-system" namespace to be "Ready" ...
	I0211 03:06:39.929175  334684 pod_ready.go:93] pod "kube-proxy-cw9g9" in "kube-system" namespace has status "Ready":"True"
	I0211 03:06:39.929197  334684 pod_ready.go:82] duration metric: took 3.558466ms for pod "kube-proxy-cw9g9" in "kube-system" namespace to be "Ready" ...
	I0211 03:06:39.929208  334684 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-calico-065740" in "kube-system" namespace to be "Ready" ...
	I0211 03:06:40.310874  334684 pod_ready.go:93] pod "kube-scheduler-calico-065740" in "kube-system" namespace has status "Ready":"True"
	I0211 03:06:40.310899  334684 pod_ready.go:82] duration metric: took 381.685696ms for pod "kube-scheduler-calico-065740" in "kube-system" namespace to be "Ready" ...
	I0211 03:06:40.310911  334684 pod_ready.go:39] duration metric: took 12m0.417850325s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 03:06:40.310931  334684 api_server.go:52] waiting for apiserver process to appear ...
	I0211 03:06:40.310970  334684 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:06:40.311019  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:06:40.346451  334684 cri.go:89] found id: "bb960d8c24166ad4a8bef14bb40e7abf839f2bf8ebd4eeabdd6a74aaf9d90092"
	I0211 03:06:40.346473  334684 cri.go:89] found id: ""
	I0211 03:06:40.346479  334684 logs.go:282] 1 containers: [bb960d8c24166ad4a8bef14bb40e7abf839f2bf8ebd4eeabdd6a74aaf9d90092]
	I0211 03:06:40.346535  334684 ssh_runner.go:195] Run: which crictl
	I0211 03:06:40.350015  334684 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:06:40.350082  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:06:40.382638  334684 cri.go:89] found id: "395842c08fa9275d5bad0b5422f2c4b8be3ac906127098b7a1022d62deeaa41e"
	I0211 03:06:40.382663  334684 cri.go:89] found id: ""
	I0211 03:06:40.382671  334684 logs.go:282] 1 containers: [395842c08fa9275d5bad0b5422f2c4b8be3ac906127098b7a1022d62deeaa41e]
	I0211 03:06:40.382713  334684 ssh_runner.go:195] Run: which crictl
	I0211 03:06:40.385999  334684 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:06:40.386047  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:06:40.417717  334684 cri.go:89] found id: ""
	I0211 03:06:40.417743  334684 logs.go:282] 0 containers: []
	W0211 03:06:40.417760  334684 logs.go:284] No container was found matching "coredns"
	I0211 03:06:40.417766  334684 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:06:40.417813  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:06:40.451084  334684 cri.go:89] found id: "a5b40821ba9985247b7ffe1b980dbbb27016f3e937f8b8d124472075c4f2b632"
	I0211 03:06:40.451109  334684 cri.go:89] found id: ""
	I0211 03:06:40.451122  334684 logs.go:282] 1 containers: [a5b40821ba9985247b7ffe1b980dbbb27016f3e937f8b8d124472075c4f2b632]
	I0211 03:06:40.451167  334684 ssh_runner.go:195] Run: which crictl
	I0211 03:06:40.454430  334684 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:06:40.454504  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:06:40.486638  334684 cri.go:89] found id: "17e94eb32a28c089aabebf6f73103ca089dfbdbcdc11cb7db24ae7463afb88f4"
	I0211 03:06:40.486663  334684 cri.go:89] found id: ""
	I0211 03:06:40.486672  334684 logs.go:282] 1 containers: [17e94eb32a28c089aabebf6f73103ca089dfbdbcdc11cb7db24ae7463afb88f4]
	I0211 03:06:40.486730  334684 ssh_runner.go:195] Run: which crictl
	I0211 03:06:40.489960  334684 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:06:40.490030  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:06:40.522250  334684 cri.go:89] found id: "ea8d59a9e2711b48509e206afc1d93853ccd0847a3ed2518abd8bb356e255d50"
	I0211 03:06:40.522272  334684 cri.go:89] found id: ""
	I0211 03:06:40.522280  334684 logs.go:282] 1 containers: [ea8d59a9e2711b48509e206afc1d93853ccd0847a3ed2518abd8bb356e255d50]
	I0211 03:06:40.522338  334684 ssh_runner.go:195] Run: which crictl
	I0211 03:06:40.525770  334684 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:06:40.525828  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:06:40.558931  334684 cri.go:89] found id: ""
	I0211 03:06:40.558953  334684 logs.go:282] 0 containers: []
	W0211 03:06:40.558960  334684 logs.go:284] No container was found matching "kindnet"
	I0211 03:06:40.558981  334684 logs.go:123] Gathering logs for kube-apiserver [bb960d8c24166ad4a8bef14bb40e7abf839f2bf8ebd4eeabdd6a74aaf9d90092] ...
	I0211 03:06:40.558998  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb960d8c24166ad4a8bef14bb40e7abf839f2bf8ebd4eeabdd6a74aaf9d90092"
	I0211 03:06:40.597040  334684 logs.go:123] Gathering logs for etcd [395842c08fa9275d5bad0b5422f2c4b8be3ac906127098b7a1022d62deeaa41e] ...
	I0211 03:06:40.597067  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 395842c08fa9275d5bad0b5422f2c4b8be3ac906127098b7a1022d62deeaa41e"
	I0211 03:06:40.634198  334684 logs.go:123] Gathering logs for kube-scheduler [a5b40821ba9985247b7ffe1b980dbbb27016f3e937f8b8d124472075c4f2b632] ...
	I0211 03:06:40.634228  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5b40821ba9985247b7ffe1b980dbbb27016f3e937f8b8d124472075c4f2b632"
	I0211 03:06:40.674747  334684 logs.go:123] Gathering logs for kube-proxy [17e94eb32a28c089aabebf6f73103ca089dfbdbcdc11cb7db24ae7463afb88f4] ...
	I0211 03:06:40.674781  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17e94eb32a28c089aabebf6f73103ca089dfbdbcdc11cb7db24ae7463afb88f4"
	I0211 03:06:40.708847  334684 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:06:40.708874  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:06:40.776762  334684 logs.go:123] Gathering logs for container status ...
	I0211 03:06:40.776799  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:06:40.814163  334684 logs.go:123] Gathering logs for kubelet ...
	I0211 03:06:40.814196  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:06:40.969572  334684 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:06:40.969610  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0211 03:06:41.055817  334684 logs.go:123] Gathering logs for dmesg ...
	I0211 03:06:41.055848  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:06:41.082838  334684 logs.go:123] Gathering logs for kube-controller-manager [ea8d59a9e2711b48509e206afc1d93853ccd0847a3ed2518abd8bb356e255d50] ...
	I0211 03:06:41.082870  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea8d59a9e2711b48509e206afc1d93853ccd0847a3ed2518abd8bb356e255d50"
	I0211 03:06:43.629521  334684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:06:43.640471  334684 api_server.go:72] duration metric: took 12m10.256296917s to wait for apiserver process to appear ...
	I0211 03:06:43.640499  334684 api_server.go:88] waiting for apiserver healthz status ...
	I0211 03:06:43.640542  334684 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:06:43.640599  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:06:43.673585  334684 cri.go:89] found id: "bb960d8c24166ad4a8bef14bb40e7abf839f2bf8ebd4eeabdd6a74aaf9d90092"
	I0211 03:06:43.673611  334684 cri.go:89] found id: ""
	I0211 03:06:43.673620  334684 logs.go:282] 1 containers: [bb960d8c24166ad4a8bef14bb40e7abf839f2bf8ebd4eeabdd6a74aaf9d90092]
	I0211 03:06:43.673673  334684 ssh_runner.go:195] Run: which crictl
	I0211 03:06:43.677037  334684 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:06:43.677089  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:06:43.708993  334684 cri.go:89] found id: "395842c08fa9275d5bad0b5422f2c4b8be3ac906127098b7a1022d62deeaa41e"
	I0211 03:06:43.709015  334684 cri.go:89] found id: ""
	I0211 03:06:43.709025  334684 logs.go:282] 1 containers: [395842c08fa9275d5bad0b5422f2c4b8be3ac906127098b7a1022d62deeaa41e]
	I0211 03:06:43.709082  334684 ssh_runner.go:195] Run: which crictl
	I0211 03:06:43.712411  334684 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:06:43.712478  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:06:43.744456  334684 cri.go:89] found id: ""
	I0211 03:06:43.744481  334684 logs.go:282] 0 containers: []
	W0211 03:06:43.744489  334684 logs.go:284] No container was found matching "coredns"
	I0211 03:06:43.744495  334684 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:06:43.744544  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:06:43.777015  334684 cri.go:89] found id: "a5b40821ba9985247b7ffe1b980dbbb27016f3e937f8b8d124472075c4f2b632"
	I0211 03:06:43.777035  334684 cri.go:89] found id: ""
	I0211 03:06:43.777042  334684 logs.go:282] 1 containers: [a5b40821ba9985247b7ffe1b980dbbb27016f3e937f8b8d124472075c4f2b632]
	I0211 03:06:43.777101  334684 ssh_runner.go:195] Run: which crictl
	I0211 03:06:43.780375  334684 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:06:43.780430  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:06:43.811922  334684 cri.go:89] found id: "17e94eb32a28c089aabebf6f73103ca089dfbdbcdc11cb7db24ae7463afb88f4"
	I0211 03:06:43.811944  334684 cri.go:89] found id: ""
	I0211 03:06:43.811950  334684 logs.go:282] 1 containers: [17e94eb32a28c089aabebf6f73103ca089dfbdbcdc11cb7db24ae7463afb88f4]
	I0211 03:06:43.811997  334684 ssh_runner.go:195] Run: which crictl
	I0211 03:06:43.815367  334684 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:06:43.815431  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:06:43.848011  334684 cri.go:89] found id: "ea8d59a9e2711b48509e206afc1d93853ccd0847a3ed2518abd8bb356e255d50"
	I0211 03:06:43.848031  334684 cri.go:89] found id: ""
	I0211 03:06:43.848043  334684 logs.go:282] 1 containers: [ea8d59a9e2711b48509e206afc1d93853ccd0847a3ed2518abd8bb356e255d50]
	I0211 03:06:43.848717  334684 ssh_runner.go:195] Run: which crictl
	I0211 03:06:43.853315  334684 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:06:43.853422  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:06:43.885339  334684 cri.go:89] found id: ""
	I0211 03:06:43.885364  334684 logs.go:282] 0 containers: []
	W0211 03:06:43.885374  334684 logs.go:284] No container was found matching "kindnet"
	I0211 03:06:43.885389  334684 logs.go:123] Gathering logs for kube-scheduler [a5b40821ba9985247b7ffe1b980dbbb27016f3e937f8b8d124472075c4f2b632] ...
	I0211 03:06:43.885408  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5b40821ba9985247b7ffe1b980dbbb27016f3e937f8b8d124472075c4f2b632"
	I0211 03:06:43.925715  334684 logs.go:123] Gathering logs for kube-proxy [17e94eb32a28c089aabebf6f73103ca089dfbdbcdc11cb7db24ae7463afb88f4] ...
	I0211 03:06:43.925747  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17e94eb32a28c089aabebf6f73103ca089dfbdbcdc11cb7db24ae7463afb88f4"
	I0211 03:06:43.959254  334684 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:06:43.959285  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:06:44.025762  334684 logs.go:123] Gathering logs for dmesg ...
	I0211 03:06:44.025809  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:06:44.051510  334684 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:06:44.051543  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0211 03:06:44.135435  334684 logs.go:123] Gathering logs for etcd [395842c08fa9275d5bad0b5422f2c4b8be3ac906127098b7a1022d62deeaa41e] ...
	I0211 03:06:44.135459  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 395842c08fa9275d5bad0b5422f2c4b8be3ac906127098b7a1022d62deeaa41e"
	I0211 03:06:44.173248  334684 logs.go:123] Gathering logs for container status ...
	I0211 03:06:44.173279  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:06:44.210501  334684 logs.go:123] Gathering logs for kubelet ...
	I0211 03:06:44.210529  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:06:44.365357  334684 logs.go:123] Gathering logs for kube-apiserver [bb960d8c24166ad4a8bef14bb40e7abf839f2bf8ebd4eeabdd6a74aaf9d90092] ...
	I0211 03:06:44.365391  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb960d8c24166ad4a8bef14bb40e7abf839f2bf8ebd4eeabdd6a74aaf9d90092"
	I0211 03:06:44.406151  334684 logs.go:123] Gathering logs for kube-controller-manager [ea8d59a9e2711b48509e206afc1d93853ccd0847a3ed2518abd8bb356e255d50] ...
	I0211 03:06:44.406180  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea8d59a9e2711b48509e206afc1d93853ccd0847a3ed2518abd8bb356e255d50"
	I0211 03:06:46.951442  334684 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0211 03:06:46.956589  334684 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0211 03:06:46.961422  334684 api_server.go:141] control plane version: v1.32.1
	I0211 03:06:46.961467  334684 api_server.go:131] duration metric: took 3.32096018s to wait for apiserver health ...
	I0211 03:06:46.961478  334684 system_pods.go:43] waiting for kube-system pods to appear ...
	I0211 03:06:46.961507  334684 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:06:46.961567  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:06:47.022378  334684 cri.go:89] found id: "bb960d8c24166ad4a8bef14bb40e7abf839f2bf8ebd4eeabdd6a74aaf9d90092"
	I0211 03:06:47.022401  334684 cri.go:89] found id: ""
	I0211 03:06:47.022409  334684 logs.go:282] 1 containers: [bb960d8c24166ad4a8bef14bb40e7abf839f2bf8ebd4eeabdd6a74aaf9d90092]
	I0211 03:06:47.022467  334684 ssh_runner.go:195] Run: which crictl
	I0211 03:06:47.026024  334684 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:06:47.026094  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:06:47.058836  334684 cri.go:89] found id: "395842c08fa9275d5bad0b5422f2c4b8be3ac906127098b7a1022d62deeaa41e"
	I0211 03:06:47.058861  334684 cri.go:89] found id: ""
	I0211 03:06:47.058870  334684 logs.go:282] 1 containers: [395842c08fa9275d5bad0b5422f2c4b8be3ac906127098b7a1022d62deeaa41e]
	I0211 03:06:47.058920  334684 ssh_runner.go:195] Run: which crictl
	I0211 03:06:47.062270  334684 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:06:47.062322  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:06:47.095200  334684 cri.go:89] found id: ""
	I0211 03:06:47.095222  334684 logs.go:282] 0 containers: []
	W0211 03:06:47.095229  334684 logs.go:284] No container was found matching "coredns"
	I0211 03:06:47.095234  334684 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:06:47.095287  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:06:47.129280  334684 cri.go:89] found id: "a5b40821ba9985247b7ffe1b980dbbb27016f3e937f8b8d124472075c4f2b632"
	I0211 03:06:47.129301  334684 cri.go:89] found id: ""
	I0211 03:06:47.129308  334684 logs.go:282] 1 containers: [a5b40821ba9985247b7ffe1b980dbbb27016f3e937f8b8d124472075c4f2b632]
	I0211 03:06:47.129355  334684 ssh_runner.go:195] Run: which crictl
	I0211 03:06:47.132779  334684 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:06:47.132830  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:06:47.164295  334684 cri.go:89] found id: "17e94eb32a28c089aabebf6f73103ca089dfbdbcdc11cb7db24ae7463afb88f4"
	I0211 03:06:47.164321  334684 cri.go:89] found id: ""
	I0211 03:06:47.164330  334684 logs.go:282] 1 containers: [17e94eb32a28c089aabebf6f73103ca089dfbdbcdc11cb7db24ae7463afb88f4]
	I0211 03:06:47.164375  334684 ssh_runner.go:195] Run: which crictl
	I0211 03:06:47.167749  334684 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:06:47.167813  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:06:47.198907  334684 cri.go:89] found id: "ea8d59a9e2711b48509e206afc1d93853ccd0847a3ed2518abd8bb356e255d50"
	I0211 03:06:47.198929  334684 cri.go:89] found id: ""
	I0211 03:06:47.198937  334684 logs.go:282] 1 containers: [ea8d59a9e2711b48509e206afc1d93853ccd0847a3ed2518abd8bb356e255d50]
	I0211 03:06:47.198989  334684 ssh_runner.go:195] Run: which crictl
	I0211 03:06:47.202242  334684 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:06:47.202310  334684 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:06:47.236187  334684 cri.go:89] found id: ""
	I0211 03:06:47.236213  334684 logs.go:282] 0 containers: []
	W0211 03:06:47.236226  334684 logs.go:284] No container was found matching "kindnet"
	I0211 03:06:47.236243  334684 logs.go:123] Gathering logs for kube-proxy [17e94eb32a28c089aabebf6f73103ca089dfbdbcdc11cb7db24ae7463afb88f4] ...
	I0211 03:06:47.236258  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17e94eb32a28c089aabebf6f73103ca089dfbdbcdc11cb7db24ae7463afb88f4"
	I0211 03:06:47.268498  334684 logs.go:123] Gathering logs for kube-controller-manager [ea8d59a9e2711b48509e206afc1d93853ccd0847a3ed2518abd8bb356e255d50] ...
	I0211 03:06:47.268527  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea8d59a9e2711b48509e206afc1d93853ccd0847a3ed2518abd8bb356e255d50"
	I0211 03:06:47.314805  334684 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:06:47.314835  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:06:47.382011  334684 logs.go:123] Gathering logs for container status ...
	I0211 03:06:47.382043  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:06:47.420208  334684 logs.go:123] Gathering logs for etcd [395842c08fa9275d5bad0b5422f2c4b8be3ac906127098b7a1022d62deeaa41e] ...
	I0211 03:06:47.420236  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 395842c08fa9275d5bad0b5422f2c4b8be3ac906127098b7a1022d62deeaa41e"
	I0211 03:06:47.459137  334684 logs.go:123] Gathering logs for kube-scheduler [a5b40821ba9985247b7ffe1b980dbbb27016f3e937f8b8d124472075c4f2b632] ...
	I0211 03:06:47.459166  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5b40821ba9985247b7ffe1b980dbbb27016f3e937f8b8d124472075c4f2b632"
	I0211 03:06:47.502409  334684 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:06:47.502442  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0211 03:06:47.583506  334684 logs.go:123] Gathering logs for kube-apiserver [bb960d8c24166ad4a8bef14bb40e7abf839f2bf8ebd4eeabdd6a74aaf9d90092] ...
	I0211 03:06:47.583532  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb960d8c24166ad4a8bef14bb40e7abf839f2bf8ebd4eeabdd6a74aaf9d90092"
	I0211 03:06:47.624311  334684 logs.go:123] Gathering logs for kubelet ...
	I0211 03:06:47.624343  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:06:47.780159  334684 logs.go:123] Gathering logs for dmesg ...
	I0211 03:06:47.780191  334684 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:06:50.311753  334684 system_pods.go:59] 9 kube-system pods found
	I0211 03:06:50.311798  334684 system_pods.go:61] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:06:50.311810  334684 system_pods.go:61] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:06:50.311817  334684 system_pods.go:61] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:06:50.311822  334684 system_pods.go:61] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:06:50.311827  334684 system_pods.go:61] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:06:50.311832  334684 system_pods.go:61] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:06:50.311839  334684 system_pods.go:61] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:06:50.311842  334684 system_pods.go:61] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:06:50.311845  334684 system_pods.go:61] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:06:50.311852  334684 system_pods.go:74] duration metric: took 3.350368245s to wait for pod list to return data ...
	I0211 03:06:50.311861  334684 default_sa.go:34] waiting for default service account to be created ...
	I0211 03:06:50.314110  334684 default_sa.go:45] found service account: "default"
	I0211 03:06:50.314131  334684 default_sa.go:55] duration metric: took 2.265447ms for default service account to be created ...
	I0211 03:06:50.314139  334684 system_pods.go:116] waiting for k8s-apps to be running ...
	I0211 03:06:50.316409  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:06:50.316437  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:06:50.316445  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:06:50.316452  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:06:50.316459  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:06:50.316464  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:06:50.316471  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:06:50.316474  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:06:50.316478  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:06:50.316483  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:06:50.316516  334684 retry.go:31] will retry after 248.956262ms: missing components: kube-dns
	I0211 03:06:50.569040  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:06:50.569071  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:06:50.569080  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:06:50.569087  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:06:50.569092  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:06:50.569098  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:06:50.569102  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:06:50.569106  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:06:50.569110  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:06:50.569113  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:06:50.569128  334684 retry.go:31] will retry after 372.031094ms: missing components: kube-dns
	I0211 03:06:50.945399  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:06:50.945436  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:06:50.945449  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:06:50.945460  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:06:50.945466  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:06:50.945474  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:06:50.945479  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:06:50.945486  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:06:50.945492  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:06:50.945500  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:06:50.945519  334684 retry.go:31] will retry after 447.125157ms: missing components: kube-dns
	I0211 03:06:51.396137  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:06:51.396174  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:06:51.396186  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:06:51.396196  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:06:51.396204  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:06:51.396212  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:06:51.396218  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:06:51.396228  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:06:51.396234  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:06:51.396241  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:06:51.396260  334684 retry.go:31] will retry after 580.393068ms: missing components: kube-dns
	I0211 03:06:51.980177  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:06:51.980214  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:06:51.980227  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:06:51.980236  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:06:51.980242  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:06:51.980250  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:06:51.980256  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:06:51.980262  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:06:51.980268  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:06:51.980274  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:06:51.980297  334684 retry.go:31] will retry after 581.223625ms: missing components: kube-dns
	I0211 03:06:52.565070  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:06:52.565105  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:06:52.565117  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:06:52.565129  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:06:52.565135  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:06:52.565142  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:06:52.565147  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:06:52.565153  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:06:52.565159  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:06:52.565164  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:06:52.565184  334684 retry.go:31] will retry after 882.761964ms: missing components: kube-dns
	I0211 03:06:53.451685  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:06:53.451722  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:06:53.451733  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:06:53.451741  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:06:53.451746  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:06:53.451752  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:06:53.451755  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:06:53.451759  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:06:53.451764  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:06:53.451770  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:06:53.451785  334684 retry.go:31] will retry after 1.047868667s: missing components: kube-dns
	I0211 03:06:54.502848  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:06:54.502879  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:06:54.502888  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:06:54.502895  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:06:54.502899  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:06:54.502904  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:06:54.502907  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:06:54.502912  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:06:54.502916  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:06:54.502921  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:06:54.502935  334684 retry.go:31] will retry after 1.207742799s: missing components: kube-dns
	I0211 03:06:55.715034  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:06:55.715068  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:06:55.715077  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:06:55.715086  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:06:55.715094  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:06:55.715102  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:06:55.715111  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:06:55.715120  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:06:55.715125  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:06:55.715132  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:06:55.715149  334684 retry.go:31] will retry after 1.459729365s: missing components: kube-dns
	I0211 03:06:57.179727  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:06:57.179833  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:06:57.179867  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:06:57.179901  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:06:57.179926  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:06:57.179948  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:06:57.179971  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:06:57.179998  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:06:57.180020  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:06:57.180040  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:06:57.180066  334684 retry.go:31] will retry after 1.558307975s: missing components: kube-dns
	I0211 03:06:58.742079  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:06:58.742116  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:06:58.742127  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:06:58.742133  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:06:58.742138  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:06:58.742144  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:06:58.742149  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:06:58.742155  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:06:58.742160  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:06:58.742167  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:06:58.742189  334684 retry.go:31] will retry after 2.192595615s: missing components: kube-dns
	I0211 03:07:00.939748  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:07:00.939781  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:07:00.939792  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:07:00.939798  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:07:00.939803  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:07:00.939809  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:07:00.939813  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:07:00.939820  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:07:00.939824  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:07:00.939829  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:07:00.939844  334684 retry.go:31] will retry after 3.161875015s: missing components: kube-dns
	I0211 03:07:04.106286  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:07:04.106324  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:07:04.106337  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:07:04.106343  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:07:04.106348  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:07:04.106353  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:07:04.106357  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:07:04.106362  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:07:04.106365  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:07:04.106369  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:07:04.106386  334684 retry.go:31] will retry after 2.968782314s: missing components: kube-dns
	I0211 03:07:07.079167  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:07:07.079200  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:07:07.079210  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:07:07.079216  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:07:07.079221  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:07:07.079225  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:07:07.079230  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:07:07.079234  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:07:07.079237  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:07:07.079241  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:07:07.079257  334684 retry.go:31] will retry after 5.43786142s: missing components: kube-dns
	I0211 03:07:12.522105  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:07:12.522135  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:07:12.522147  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:07:12.522154  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:07:12.522159  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:07:12.522164  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:07:12.522167  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:07:12.522172  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:07:12.522175  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:07:12.522179  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:07:12.522195  334684 retry.go:31] will retry after 6.342467756s: missing components: kube-dns
	I0211 03:07:18.869224  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:07:18.869255  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:07:18.869266  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:07:18.869274  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:07:18.869281  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:07:18.869286  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:07:18.869290  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:07:18.869294  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:07:18.869298  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:07:18.869303  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:07:18.869318  334684 retry.go:31] will retry after 7.13928547s: missing components: kube-dns
	I0211 03:07:26.012245  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:07:26.012278  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:07:26.012287  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:07:26.012295  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:07:26.012299  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:07:26.012304  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:07:26.012308  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:07:26.012312  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:07:26.012317  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:07:26.012321  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:07:26.012334  334684 retry.go:31] will retry after 10.740114281s: missing components: kube-dns
	I0211 03:07:36.759031  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:07:36.759068  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:07:36.759077  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:07:36.759086  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:07:36.759092  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:07:36.759097  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:07:36.759102  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:07:36.759107  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:07:36.759112  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:07:36.759116  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:07:36.759139  334684 retry.go:31] will retry after 12.005809334s: missing components: kube-dns
	I0211 03:07:48.769633  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:07:48.769667  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:07:48.769707  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:07:48.769721  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:07:48.769735  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:07:48.769745  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:07:48.769752  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:07:48.769759  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:07:48.769763  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:07:48.769770  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:07:48.769786  334684 retry.go:31] will retry after 16.259010772s: missing components: kube-dns
	I0211 03:08:05.033327  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:08:05.033360  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:08:05.033369  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:08:05.033375  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:08:05.033380  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:08:05.033389  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:08:05.033394  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:08:05.033400  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:08:05.033408  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:08:05.033412  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:08:05.033432  334684 retry.go:31] will retry after 17.284063335s: missing components: kube-dns
	I0211 03:08:22.322702  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:08:22.322752  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:08:22.322762  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:08:22.322776  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:08:22.322783  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:08:22.322791  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:08:22.322799  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:08:22.322804  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:08:22.322812  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:08:22.322821  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:08:22.322840  334684 retry.go:31] will retry after 21.534825026s: missing components: kube-dns
	I0211 03:08:43.862045  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:08:43.862081  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:08:43.862091  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:08:43.862100  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:08:43.862107  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:08:43.862114  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:08:43.862120  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:08:43.862134  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:08:43.862139  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:08:43.862144  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:08:43.862164  334684 retry.go:31] will retry after 23.205823683s: missing components: kube-dns
	I0211 03:09:07.072366  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:09:07.072399  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:09:07.072411  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:09:07.072418  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:09:07.072425  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:09:07.072432  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:09:07.072436  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:09:07.072440  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:09:07.072444  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:09:07.072448  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:09:07.072464  334684 retry.go:31] will retry after 33.834731245s: missing components: kube-dns
	I0211 03:09:40.911578  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:09:40.911610  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:09:40.911619  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:09:40.911627  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:09:40.911633  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:09:40.911639  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:09:40.911642  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:09:40.911646  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:09:40.911650  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:09:40.911653  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:09:40.911669  334684 retry.go:31] will retry after 40.914636636s: missing components: kube-dns
	I0211 03:10:21.832500  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:10:21.832536  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:10:21.832545  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:10:21.832551  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:10:21.832556  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:10:21.832561  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:10:21.832565  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:10:21.832571  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:10:21.832575  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:10:21.832578  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:10:21.832596  334684 retry.go:31] will retry after 57.698845616s: missing components: kube-dns
	I0211 03:11:19.536482  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:11:19.536519  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:11:19.536529  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:11:19.536535  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:11:19.536540  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:11:19.536545  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:11:19.536549  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:11:19.536553  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:11:19.536556  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:11:19.536560  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:11:19.536580  334684 retry.go:31] will retry after 53.111528931s: missing components: kube-dns
	I0211 03:12:12.653772  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:12:12.653817  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:12:12.653835  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:12:12.653844  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:12:12.653848  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:12:12.653852  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:12:12.653858  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:12:12.653863  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:12:12.653867  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:12:12.653870  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:12:12.653886  334684 retry.go:31] will retry after 1m4.957164178s: missing components: kube-dns
	I0211 03:13:17.618717  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:13:17.618756  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:13:17.618770  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:13:17.618777  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:13:17.618782  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:13:17.618788  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:13:17.618792  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:13:17.618797  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:13:17.618800  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:13:17.618806  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:13:17.618823  334684 retry.go:31] will retry after 1m7.09660363s: missing components: kube-dns
	I0211 03:14:24.720139  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:14:24.720179  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:14:24.720192  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:14:24.720202  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:14:24.720208  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:14:24.720217  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:14:24.720224  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:14:24.720231  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:14:24.720238  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:14:24.720244  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:14:24.720267  334684 retry.go:31] will retry after 1m3.135941213s: missing components: kube-dns
	I0211 03:15:27.860435  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:15:27.860472  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:15:27.860483  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:15:27.860490  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:15:27.860494  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:15:27.860499  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:15:27.860503  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:15:27.860507  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:15:27.860510  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:15:27.860514  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:15:27.860531  334684 retry.go:31] will retry after 48.343148621s: missing components: kube-dns
	I0211 03:16:16.207868  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:16:16.207907  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:16:16.207920  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:16:16.207927  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:16:16.207935  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:16:16.207941  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:16:16.207945  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:16:16.207950  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:16:16.207953  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:16:16.207957  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:16:16.207972  334684 retry.go:31] will retry after 1m5.537544413s: missing components: kube-dns
	I0211 03:17:21.751773  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:17:21.751808  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:17:21.751820  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:17:21.751827  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:17:21.751832  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:17:21.751837  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:17:21.751841  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:17:21.751848  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:17:21.751852  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:17:21.751858  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:17:21.751875  334684 retry.go:31] will retry after 48.414848251s: missing components: kube-dns
	I0211 03:18:10.172011  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:18:10.172052  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:18:10.172061  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:18:10.172067  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:18:10.172072  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:18:10.172077  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:18:10.172080  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:18:10.172084  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:18:10.172088  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:18:10.172091  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:18:10.172130  334684 retry.go:31] will retry after 1m11.40832854s: missing components: kube-dns
	I0211 03:19:21.587707  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:19:21.587748  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:19:21.587759  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:19:21.587765  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:19:21.587770  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:19:21.587775  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:19:21.587779  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:19:21.587783  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:19:21.587787  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:19:21.587792  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:19:21.587809  334684 retry.go:31] will retry after 55.060198287s: missing components: kube-dns
	I0211 03:20:16.651743  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:20:16.651779  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:20:16.651788  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:20:16.651793  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:20:16.651798  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:20:16.651802  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:20:16.651806  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:20:16.651810  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:20:16.651814  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:20:16.651820  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:20:16.651838  334684 retry.go:31] will retry after 1m4.693195547s: missing components: kube-dns
	I0211 03:21:21.348569  334684 system_pods.go:86] 9 kube-system pods found
	I0211 03:21:21.348603  334684 system_pods.go:89] "calico-kube-controllers-77969b7d87-s7lml" [0dba92f5-717d-41ac-b1c8-b455b6b47ddb] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0211 03:21:21.348615  334684 system_pods.go:89] "calico-node-8wh8m" [e0cf2cf5-bf3f-485d-98d6-605c330b6e59] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0211 03:21:21.348625  334684 system_pods.go:89] "coredns-668d6bf9bc-llj6k" [063714d4-91e1-4309-8d02-44625c926d79] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:21:21.348631  334684 system_pods.go:89] "etcd-calico-065740" [4ea8ec27-4f6e-4569-9fbf-83079582da11] Running
	I0211 03:21:21.348638  334684 system_pods.go:89] "kube-apiserver-calico-065740" [c04f130f-f8ad-46a7-9e7a-42216f3c17d1] Running
	I0211 03:21:21.348646  334684 system_pods.go:89] "kube-controller-manager-calico-065740" [68c8692b-e82b-4ec1-8a3d-8a9bbd5d43e1] Running
	I0211 03:21:21.348653  334684 system_pods.go:89] "kube-proxy-cw9g9" [e1e20cad-8d84-412d-8d91-ad6e6d6ef922] Running
	I0211 03:21:21.348658  334684 system_pods.go:89] "kube-scheduler-calico-065740" [6ca8bac8-0056-47b0-9afb-c906507be65b] Running
	I0211 03:21:21.348666  334684 system_pods.go:89] "storage-provisioner" [64af84a0-c44b-4df6-963c-b27b3eb762df] Running
	I0211 03:21:21.350922  334684 out.go:201] 
	W0211 03:21:21.352452  334684 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0211 03:21:21.352472  334684 out.go:270] * 
	* 
	W0211 03:21:21.353345  334684 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0211 03:21:21.354535  334684 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (1638.77s)
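The failure above traces to calico-node-8wh8m never completing its mount-bpffs init container, which leaves coredns-668d6bf9bc-llj6k Pending and eventually exhausts the 15m0s apps_running wait. A minimal manual triage sketch follows; it assumes the kubectl context carries the profile name calico-065740, and the commands are standard kubectl/minikube invocations for illustration, not part of this recorded run:

	# Hypothetical follow-up (not from this run): inspect why the init container never finishes
	kubectl --context calico-065740 -n kube-system describe pod calico-node-8wh8m
	kubectl --context calico-065740 -n kube-system logs calico-node-8wh8m -c mount-bpffs
	# Recent events often show the image-pull or mount error keeping kube-dns unschedulable
	kubectl --context calico-065740 -n kube-system get events --sort-by=.lastTimestamp
	# Full minikube-side logs, as the report's advice box suggests
	out/minikube-linux-amd64 -p calico-065740 logs --file=logs.txt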

                                                
                                    

Test pass (283/324)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 6.61
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 4.72
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.2
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.08
21 TestBinaryMirror 0.76
22 TestOffline 66.28
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 117.48
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 8.47
35 TestAddons/parallel/Registry 13.64
37 TestAddons/parallel/InspektorGadget 10.62
38 TestAddons/parallel/MetricsServer 5.67
40 TestAddons/parallel/CSI 64.63
41 TestAddons/parallel/Headlamp 18.4
42 TestAddons/parallel/CloudSpanner 5.64
43 TestAddons/parallel/LocalPath 54.98
44 TestAddons/parallel/NvidiaDevicePlugin 5.47
45 TestAddons/parallel/Yakd 11.89
46 TestAddons/parallel/AmdGpuDevicePlugin 6.45
47 TestAddons/StoppedEnableDisable 12.09
48 TestCertOptions 31.33
49 TestCertExpiration 225.08
51 TestForceSystemdFlag 26.1
54 TestKVMDriverInstallOrUpdate 3.23
58 TestErrorSpam/setup 20.6
59 TestErrorSpam/start 0.58
60 TestErrorSpam/status 0.87
61 TestErrorSpam/pause 1.49
62 TestErrorSpam/unpause 1.7
63 TestErrorSpam/stop 1.35
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 43.68
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 35.52
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.04
75 TestFunctional/serial/CacheCmd/cache/add_local 1.3
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 39.24
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.34
86 TestFunctional/serial/LogsFileCmd 1.37
87 TestFunctional/serial/InvalidService 4.08
89 TestFunctional/parallel/ConfigCmd 0.39
90 TestFunctional/parallel/DashboardCmd 7.84
91 TestFunctional/parallel/DryRun 0.44
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.22
97 TestFunctional/parallel/ServiceCmdConnect 7.7
98 TestFunctional/parallel/AddonsCmd 0.2
101 TestFunctional/parallel/SSHCmd 0.62
102 TestFunctional/parallel/CpCmd 2.13
104 TestFunctional/parallel/FileSync 0.25
105 TestFunctional/parallel/CertSync 1.62
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
113 TestFunctional/parallel/License 0.2
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.21
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
116 TestFunctional/parallel/ProfileCmd/profile_list 0.49
117 TestFunctional/parallel/MountCmd/any-port 7.21
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.66
119 TestFunctional/parallel/Version/short 0.05
120 TestFunctional/parallel/Version/components 0.45
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
125 TestFunctional/parallel/ImageCommands/ImageBuild 2.04
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
134 TestFunctional/parallel/MountCmd/specific-port 1.61
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.96
136 TestFunctional/parallel/ServiceCmd/List 0.51
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
140 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
141 TestFunctional/parallel/ServiceCmd/Format 0.46
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
145 TestFunctional/parallel/ServiceCmd/URL 0.45
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 102.13
162 TestMultiControlPlane/serial/DeployApp 4.47
163 TestMultiControlPlane/serial/PingHostFromPods 1.04
164 TestMultiControlPlane/serial/AddWorkerNode 33.15
165 TestMultiControlPlane/serial/NodeLabels 0.06
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
167 TestMultiControlPlane/serial/CopyFile 15.86
168 TestMultiControlPlane/serial/StopSecondaryNode 12.48
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
170 TestMultiControlPlane/serial/RestartSecondaryNode 41.63
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 140.48
173 TestMultiControlPlane/serial/DeleteSecondaryNode 11.32
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
175 TestMultiControlPlane/serial/StopCluster 35.64
176 TestMultiControlPlane/serial/RestartCluster 66.29
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
178 TestMultiControlPlane/serial/AddSecondaryNode 45.37
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
183 TestJSONOutput/start/Command 43.13
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.66
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.57
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.76
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
208 TestKicCustomNetwork/create_custom_network 28.43
209 TestKicCustomNetwork/use_default_bridge_network 22.7
210 TestKicExistingNetwork 22.35
211 TestKicCustomSubnet 26.39
212 TestKicStaticIP 24.1
213 TestMainNoArgs 0.05
214 TestMinikubeProfile 49.99
217 TestMountStart/serial/StartWithMountFirst 5.45
218 TestMountStart/serial/VerifyMountFirst 0.24
219 TestMountStart/serial/StartWithMountSecond 8.18
220 TestMountStart/serial/VerifyMountSecond 0.24
221 TestMountStart/serial/DeleteFirst 1.62
222 TestMountStart/serial/VerifyMountPostDelete 0.25
223 TestMountStart/serial/Stop 1.18
224 TestMountStart/serial/RestartStopped 7.26
225 TestMountStart/serial/VerifyMountPostStop 0.24
228 TestMultiNode/serial/FreshStart2Nodes 69.61
229 TestMultiNode/serial/DeployApp2Nodes 3.61
230 TestMultiNode/serial/PingHostFrom2Pods 0.71
231 TestMultiNode/serial/AddNode 27.37
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.61
234 TestMultiNode/serial/CopyFile 8.98
235 TestMultiNode/serial/StopNode 2.1
236 TestMultiNode/serial/StartAfterStop 9.09
237 TestMultiNode/serial/RestartKeepsNodes 105.48
238 TestMultiNode/serial/DeleteNode 5.25
239 TestMultiNode/serial/StopMultiNode 23.73
240 TestMultiNode/serial/RestartMultiNode 45.09
241 TestMultiNode/serial/ValidateNameConflict 26.28
246 TestPreload 103.02
248 TestScheduledStopUnix 96.31
251 TestInsufficientStorage 9.82
252 TestRunningBinaryUpgrade 99.85
254 TestKubernetesUpgrade 358.12
255 TestMissingContainerUpgrade 140.07
256 TestStoppedBinaryUpgrade/Setup 0.54
257 TestStoppedBinaryUpgrade/Upgrade 94.98
258 TestStoppedBinaryUpgrade/MinikubeLogs 0.8
267 TestPause/serial/Start 47.08
269 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
270 TestNoKubernetes/serial/StartWithK8s 25.16
271 TestPause/serial/SecondStartNoReconfiguration 24.92
272 TestNoKubernetes/serial/StartWithStopK8s 6.05
280 TestNetworkPlugins/group/false 3.79
281 TestPause/serial/Pause 0.76
282 TestPause/serial/VerifyStatus 0.34
283 TestPause/serial/Unpause 0.73
284 TestNoKubernetes/serial/Start 7.69
285 TestPause/serial/PauseAgain 0.89
286 TestPause/serial/DeletePaused 2.78
290 TestPause/serial/VerifyDeletedResources 15.75
291 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
292 TestNoKubernetes/serial/ProfileList 19.82
293 TestNoKubernetes/serial/Stop 1.23
294 TestNoKubernetes/serial/StartNoArgs 8.9
295 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
297 TestStartStop/group/old-k8s-version/serial/FirstStart 142.23
299 TestStartStop/group/no-preload/serial/FirstStart 54.16
300 TestStartStop/group/no-preload/serial/DeployApp 8.28
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.83
302 TestStartStop/group/no-preload/serial/Stop 11.87
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
304 TestStartStop/group/no-preload/serial/SecondStart 285.45
306 TestStartStop/group/embed-certs/serial/FirstStart 40.34
307 TestStartStop/group/old-k8s-version/serial/DeployApp 10.43
308 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.79
309 TestStartStop/group/old-k8s-version/serial/Stop 11.97
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
311 TestStartStop/group/old-k8s-version/serial/SecondStart 130.4
312 TestStartStop/group/embed-certs/serial/DeployApp 8.27
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.87
314 TestStartStop/group/embed-certs/serial/Stop 12.05
315 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
316 TestStartStop/group/embed-certs/serial/SecondStart 263.58
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 43.26
319 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.26
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.89
321 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.9
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
323 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 270.95
324 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
325 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
326 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
327 TestStartStop/group/old-k8s-version/serial/Pause 2.55
329 TestStartStop/group/newest-cni/serial/FirstStart 26.63
330 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
332 TestStartStop/group/newest-cni/serial/Stop 1.21
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
334 TestStartStop/group/newest-cni/serial/SecondStart 12.89
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
338 TestStartStop/group/newest-cni/serial/Pause 2.95
339 TestNetworkPlugins/group/auto/Start 42.08
340 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
341 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
342 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
343 TestStartStop/group/no-preload/serial/Pause 2.67
344 TestNetworkPlugins/group/flannel/Start 46.49
345 TestNetworkPlugins/group/auto/KubeletFlags 0.28
346 TestNetworkPlugins/group/auto/NetCatPod 12.22
347 TestNetworkPlugins/group/auto/DNS 0.13
348 TestNetworkPlugins/group/auto/Localhost 0.11
349 TestNetworkPlugins/group/auto/HairPin 0.11
350 TestNetworkPlugins/group/enable-default-cni/Start 36.58
351 TestNetworkPlugins/group/flannel/ControllerPod 6.01
352 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
353 TestNetworkPlugins/group/flannel/NetCatPod 9.23
354 TestNetworkPlugins/group/flannel/DNS 0.12
355 TestNetworkPlugins/group/flannel/Localhost 0.1
356 TestNetworkPlugins/group/flannel/HairPin 0.1
357 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
358 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
359 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.2
360 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
361 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
362 TestStartStop/group/embed-certs/serial/Pause 2.83
363 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
364 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
365 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
366 TestNetworkPlugins/group/bridge/Start 67.43
368 TestNetworkPlugins/group/kindnet/Start 44.95
369 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
370 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
371 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
372 TestNetworkPlugins/group/bridge/NetCatPod 10.19
373 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
374 TestNetworkPlugins/group/kindnet/NetCatPod 9.21
375 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
376 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
377 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.7
378 TestNetworkPlugins/group/bridge/DNS 0.14
379 TestNetworkPlugins/group/bridge/Localhost 0.12
380 TestNetworkPlugins/group/bridge/HairPin 0.11
381 TestNetworkPlugins/group/kindnet/DNS 0.14
382 TestNetworkPlugins/group/kindnet/Localhost 0.11
383 TestNetworkPlugins/group/kindnet/HairPin 0.1
384 TestNetworkPlugins/group/custom-flannel/Start 47.19
385 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
386 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
387 TestNetworkPlugins/group/custom-flannel/DNS 0.12
388 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
389 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
TestDownloadOnly/v1.20.0/json-events (6.61s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-097685 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-097685 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.614165113s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.61s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0211 02:01:47.447743   19028 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0211 02:01:47.447852   19028 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-097685
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-097685: exit status 85 (66.288001ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-097685 | jenkins | v1.35.0 | 11 Feb 25 02:01 UTC |          |
	|         | -p download-only-097685        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/11 02:01:40
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0211 02:01:40.872473   19040 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:01:40.872583   19040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:01:40.872592   19040 out.go:358] Setting ErrFile to fd 2...
	I0211 02:01:40.872597   19040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:01:40.872757   19040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
	W0211 02:01:40.872882   19040 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20400-12240/.minikube/config/config.json: open /home/jenkins/minikube-integration/20400-12240/.minikube/config/config.json: no such file or directory
	I0211 02:01:40.873428   19040 out.go:352] Setting JSON to true
	I0211 02:01:40.874348   19040 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2650,"bootTime":1739236651,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 02:01:40.874438   19040 start.go:139] virtualization: kvm guest
	I0211 02:01:40.876629   19040 out.go:97] [download-only-097685] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 02:01:40.876797   19040 notify.go:220] Checking for updates...
	W0211 02:01:40.876776   19040 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball: no such file or directory
	I0211 02:01:40.878088   19040 out.go:169] MINIKUBE_LOCATION=20400
	I0211 02:01:40.879407   19040 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 02:01:40.880659   19040 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	I0211 02:01:40.881826   19040 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	I0211 02:01:40.883068   19040 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0211 02:01:40.885444   19040 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0211 02:01:40.885652   19040 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:01:40.907814   19040 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0211 02:01:40.907890   19040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:01:41.290365   19040 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:52 SystemTime:2025-02-11 02:01:41.281872678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:01:41.290503   19040 docker.go:318] overlay module found
	I0211 02:01:41.292279   19040 out.go:97] Using the docker driver based on user configuration
	I0211 02:01:41.292306   19040 start.go:297] selected driver: docker
	I0211 02:01:41.292313   19040 start.go:901] validating driver "docker" against <nil>
	I0211 02:01:41.292414   19040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:01:41.340887   19040 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:true NGoroutines:52 SystemTime:2025-02-11 02:01:41.332452573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:01:41.341050   19040 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0211 02:01:41.341574   19040 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0211 02:01:41.341735   19040 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0211 02:01:41.343887   19040 out.go:169] Using Docker driver with root privileges
	I0211 02:01:41.345218   19040 cni.go:84] Creating CNI manager for ""
	I0211 02:01:41.345292   19040 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0211 02:01:41.345305   19040 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0211 02:01:41.345371   19040 start.go:340] cluster config:
	{Name:download-only-097685 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-097685 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:01:41.347026   19040 out.go:97] Starting "download-only-097685" primary control-plane node in "download-only-097685" cluster
	I0211 02:01:41.347048   19040 cache.go:121] Beginning downloading kic base image for docker with crio
	I0211 02:01:41.348562   19040 out.go:97] Pulling base image v0.0.46 ...
	I0211 02:01:41.348588   19040 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0211 02:01:41.348706   19040 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0211 02:01:41.365161   19040 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0211 02:01:41.365374   19040 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0211 02:01:41.365488   19040 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0211 02:01:41.376323   19040 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0211 02:01:41.376357   19040 cache.go:56] Caching tarball of preloaded images
	I0211 02:01:41.376491   19040 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0211 02:01:41.378571   19040 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0211 02:01:41.378595   19040 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0211 02:01:41.415749   19040 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0211 02:01:45.734833   19040 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0211 02:01:45.734914   19040 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-097685 host does not exist
	  To start a cluster, run: "minikube start -p download-only-097685"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
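Note: the download log above fetches the v1.20.0 preload tarball with an md5 digest embedded in the URL query (?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19) and then verifies the saved file. A minimal Go sketch of that kind of verification, using the tarball path and digest from the log above; this is an illustration only, not minikube's actual preload code.

	// preload_md5_check.go - illustrative sketch; not minikube's implementation.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 streams a file through md5 and compares the hex digest.
	func verifyMD5(path, wantHex string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		// Path and digest copied from the download log above.
		tarball := "/home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
		if err := verifyMD5(tarball, "f93b07cde9c3289306cbaeb7a1803c19"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("preload checksum OK")
	}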

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-097685
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.1/json-events (4.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-753440 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-753440 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.721590124s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (4.72s)

                                                
                                    
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0211 02:01:52.576744   19028 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0211 02:01:52.576791   19028 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20400-12240/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-753440
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-753440: exit status 85 (64.255901ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-097685 | jenkins | v1.35.0 | 11 Feb 25 02:01 UTC |                     |
	|         | -p download-only-097685        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 11 Feb 25 02:01 UTC | 11 Feb 25 02:01 UTC |
	| delete  | -p download-only-097685        | download-only-097685 | jenkins | v1.35.0 | 11 Feb 25 02:01 UTC | 11 Feb 25 02:01 UTC |
	| start   | -o=json --download-only        | download-only-753440 | jenkins | v1.35.0 | 11 Feb 25 02:01 UTC |                     |
	|         | -p download-only-753440        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/11 02:01:47
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0211 02:01:47.894755   19385 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:01:47.894850   19385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:01:47.894854   19385 out.go:358] Setting ErrFile to fd 2...
	I0211 02:01:47.894858   19385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:01:47.895052   19385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
	I0211 02:01:47.895581   19385 out.go:352] Setting JSON to true
	I0211 02:01:47.896429   19385 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2657,"bootTime":1739236651,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 02:01:47.896492   19385 start.go:139] virtualization: kvm guest
	I0211 02:01:47.898737   19385 out.go:97] [download-only-753440] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 02:01:47.898862   19385 notify.go:220] Checking for updates...
	I0211 02:01:47.900385   19385 out.go:169] MINIKUBE_LOCATION=20400
	I0211 02:01:47.902026   19385 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 02:01:47.903419   19385 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	I0211 02:01:47.904909   19385 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	I0211 02:01:47.906358   19385 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0211 02:01:47.909096   19385 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0211 02:01:47.909308   19385 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:01:47.930892   19385 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0211 02:01:47.930999   19385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:01:47.978008   19385 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:50 SystemTime:2025-02-11 02:01:47.969493731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:01:47.978099   19385 docker.go:318] overlay module found
	I0211 02:01:47.980388   19385 out.go:97] Using the docker driver based on user configuration
	I0211 02:01:47.980416   19385 start.go:297] selected driver: docker
	I0211 02:01:47.980422   19385 start.go:901] validating driver "docker" against <nil>
	I0211 02:01:47.980515   19385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:01:48.028613   19385 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:50 SystemTime:2025-02-11 02:01:48.020433129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:01:48.028776   19385 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0211 02:01:48.029275   19385 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0211 02:01:48.029431   19385 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0211 02:01:48.031374   19385 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-753440 host does not exist
	  To start a cluster, run: "minikube start -p download-only-753440"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-753440
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (1.08s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-590741 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-590741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-590741
--- PASS: TestDownloadOnlyKic (1.08s)

                                                
                                    
TestBinaryMirror (0.76s)

                                                
                                                
=== RUN   TestBinaryMirror
I0211 02:01:54.310080   19028 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-667903 --alsologtostderr --binary-mirror http://127.0.0.1:33743 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-667903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-667903
--- PASS: TestBinaryMirror (0.76s)
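Note: TestBinaryMirror exercises the "checksum=file:<url>" pattern logged above, where the kubectl binary is validated against the published kubectl.sha256 file rather than a digest baked into the URL itself. A rough Go sketch of that pattern, using the dl.k8s.io URLs from the log and a placeholder local path for the downloaded binary; illustration only, not minikube's binary.go.

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// fetchSHA256 downloads a published .sha256 file and returns the hex digest.
	func fetchSHA256(url string) (string, error) {
		resp, err := http.Get(url)
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		b, err := io.ReadAll(resp.Body)
		if err != nil {
			return "", err
		}
		// The file contains the digest, optionally followed by a filename.
		fields := strings.Fields(strings.TrimSpace(string(b)))
		if len(fields) == 0 {
			return "", fmt.Errorf("empty checksum file")
		}
		return fields[0], nil
	}

	func main() {
		want, err := fetchSHA256("https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256")
		if err != nil {
			panic(err)
		}
		f, err := os.Open("kubectl") // placeholder path to the downloaded binary
		if err != nil {
			panic(err)
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			panic(err)
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			fmt.Printf("mismatch: got %s want %s\n", got, want)
			os.Exit(1)
		}
		fmt.Println("kubectl sha256 OK")
	}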

                                                
                                    
TestOffline (66.28s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-417468 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-417468 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m2.381652837s)
helpers_test.go:175: Cleaning up "offline-crio-417468" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-417468
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-417468: (3.900452649s)
--- PASS: TestOffline (66.28s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-652362
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-652362: exit status 85 (51.605418ms)

                                                
                                                
-- stdout --
	* Profile "addons-652362" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-652362"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-652362
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-652362: exit status 85 (50.54928ms)

                                                
                                                
-- stdout --
	* Profile "addons-652362" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-652362"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (117.48s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-652362 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-652362 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m57.47858599s)
--- PASS: TestAddons/Setup (117.48s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-652362 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-652362 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.47s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-652362 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-652362 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bdf43ecd-1bdb-4c6d-8b9e-34b49a3b909b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bdf43ecd-1bdb-4c6d-8b9e-34b49a3b909b] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003751288s
addons_test.go:633: (dbg) Run:  kubectl --context addons-652362 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-652362 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-652362 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.47s)
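Note: the repeated "waiting ... for pods matching <selector>" lines in this report come from the test helpers polling the cluster until a matching pod reports Running. A minimal client-go sketch of that pattern, assumed for illustration rather than taken from the helpers; it reuses the KUBECONFIG environment variable shown earlier in the log, and the integration-test=busybox selector from this test.

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForRunningPod polls until at least one pod matching selector is Running.
	func waitForRunningPod(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, err
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForRunningPod(cs, "default", "integration-test=busybox", 8*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Running")
	}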

                                                
                                    
TestAddons/parallel/Registry (13.64s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.529565ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-7vlrg" [34653488-a8f9-4101-bc06-960cfcdc4ff1] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003250278s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-d9448" [42a4d4a0-7f74-47a5-9bcd-e482b88b201b] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002819041s
addons_test.go:331: (dbg) Run:  kubectl --context addons-652362 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-652362 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-652362 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.848574169s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 ip
2025/02/11 02:04:22 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.64s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.62s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xpfq5" [2d9cb139-e6b8-47ce-a873-2abca182dcb5] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00335783s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-652362 addons disable inspektor-gadget --alsologtostderr -v=1: (5.610864304s)
--- PASS: TestAddons/parallel/InspektorGadget (10.62s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.67s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.548141ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
I0211 02:04:09.722973   19028 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0211 02:04:09.723000   19028 kapi.go:107] duration metric: took 4.709057ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "metrics-server-7fbb699795-9pqgg" [70491abf-576e-4b84-8626-cc4d3735e6df] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003103256s
addons_test.go:402: (dbg) Run:  kubectl --context addons-652362 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.67s)
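Note: the "kubectl top pods -n kube-system" step above only succeeds once metrics-server is serving the metrics.k8s.io API. A hedged sketch of reading the same data programmatically with the Kubernetes metrics client; this is an assumed illustration (client method names per the generated metrics clientset), not what the test itself runs.

	package main

	import (
		"context"
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		mc, err := metricsclient.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Roughly the query behind `kubectl top pods -n kube-system`.
		list, err := mc.MetricsV1beta1().PodMetricses("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, pm := range list.Items {
			for _, c := range pm.Containers {
				cpu := c.Usage[corev1.ResourceCPU]
				mem := c.Usage[corev1.ResourceMemory]
				fmt.Printf("%s/%s cpu=%s mem=%s\n", pm.Name, c.Name, cpu.String(), mem.String())
			}
		}
	}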

                                                
                                    
TestAddons/parallel/CSI (64.63s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0211 02:04:09.718303   19028 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.719542ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-652362 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-652362 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f54f955a-abea-4938-8f1a-2dc6e556c212] Pending
helpers_test.go:344: "task-pv-pod" [f54f955a-abea-4938-8f1a-2dc6e556c212] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f54f955a-abea-4938-8f1a-2dc6e556c212] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003502229s
addons_test.go:511: (dbg) Run:  kubectl --context addons-652362 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-652362 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-652362 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-652362 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-652362 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-652362 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-652362 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0078b93d-527c-412a-b037-c1e45c00e941] Pending
helpers_test.go:344: "task-pv-pod-restore" [0078b93d-527c-412a-b037-c1e45c00e941] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0078b93d-527c-412a-b037-c1e45c00e941] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002565774s
addons_test.go:553: (dbg) Run:  kubectl --context addons-652362 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-652362 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-652362 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-652362 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.508923599s)
--- PASS: TestAddons/parallel/CSI (64.63s)
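Note: the long runs of "get pvc ... -o jsonpath={.status.phase}" above are the helpers polling a PersistentVolumeClaim until it reports Bound. The same check via client-go might look like the sketch below, under the same KUBECONFIG assumption as earlier; the hpvc name and 6m timeout come from the test above.

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll the claim's phase until it is Bound, as the jsonpath loop above does.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pvc, err := cs.CoreV1().PersistentVolumeClaims("default").Get(ctx, "hpvc", metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				return pvc.Status.Phase == corev1.ClaimBound, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pvc hpvc is Bound")
	}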

                                                
                                    
TestAddons/parallel/Headlamp (18.4s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-652362 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-tx5cp" [8da9f19d-f100-434a-be6f-09931da4a8b3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-tx5cp" [8da9f19d-f100-434a-be6f-09931da4a8b3] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003551838s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-652362 addons disable headlamp --alsologtostderr -v=1: (5.647078797s)
--- PASS: TestAddons/parallel/Headlamp (18.40s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.64s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-x2d87" [b58a543e-f951-4f95-a211-588419dfb4fb] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00320935s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.64s)

                                                
                                    
TestAddons/parallel/LocalPath (54.98s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-652362 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-652362 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-652362 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [39ac6e0d-e4f0-43cc-ab61-c3111acaece6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [39ac6e0d-e4f0-43cc-ab61-c3111acaece6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [39ac6e0d-e4f0-43cc-ab61-c3111acaece6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.002697411s
addons_test.go:906: (dbg) Run:  kubectl --context addons-652362 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 ssh "cat /opt/local-path-provisioner/pvc-403aa265-1104-4ee7-870b-3c3f736ca8be_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-652362 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-652362 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-652362 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.069437314s)
--- PASS: TestAddons/parallel/LocalPath (54.98s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.47s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-hdmx2" [daa5b722-96f2-4bec-b731-9603806ec3fa] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.002917937s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.47s)

                                                
                                    
TestAddons/parallel/Yakd (11.89s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-qffll" [8136eb92-c82c-4207-9697-891842d2b85b] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002662892s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-652362 addons disable yakd --alsologtostderr -v=1: (5.889502076s)
--- PASS: TestAddons/parallel/Yakd (11.89s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.45s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-nxm8m" [1d468a5e-64fc-49d1-8894-a802a0e9ebca] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003135956s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.45s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.09s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-652362
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-652362: (11.841488458s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-652362
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-652362
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-652362
--- PASS: TestAddons/StoppedEnableDisable (12.09s)

                                                
                                    
TestCertOptions (31.33s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-884935 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-884935 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (26.662024951s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-884935 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-884935 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-884935 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-884935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-884935
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-884935: (4.055424371s)
--- PASS: TestCertOptions (31.33s)
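
Note: the openssl step above dumps /var/lib/minikube/certs/apiserver.crt so the test can confirm that the extra --apiserver-ips and --apiserver-names values ended up in the certificate's SANs. A rough Go sketch of the same check follows; it assumes the cert has been copied out of the node to a local apiserver.crt, whereas the test itself greps the openssl text output.

	// sancheck.go: rough sketch of the SAN assertions behind the openssl step,
	// assuming apiserver.crt has been copied out of the node to the working dir.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Values passed via --apiserver-ips / --apiserver-names in the run above.
		foundIP, foundName := false, false
		for _, ip := range cert.IPAddresses {
			if ip.Equal(net.ParseIP("192.168.15.15")) {
				foundIP = true
			}
		}
		for _, name := range cert.DNSNames {
			if name == "www.google.com" {
				foundName = true
			}
		}
		fmt.Printf("IP SAN present: %v, DNS SAN present: %v\n", foundIP, foundName)
	}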

                                                
                                    
TestCertExpiration (225.08s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-759061 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-759061 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.542898397s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-759061 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-759061 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.739020549s)
helpers_test.go:175: Cleaning up "cert-expiration-759061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-759061
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-759061: (2.79876337s)
--- PASS: TestCertExpiration (225.08s)
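
Note: this test starts the profile with a 3m certificate lifetime and then restarts it with 8760h; the observable difference is the NotAfter date on the generated certificates. A small sketch of inspecting that window, again assuming a local copy of apiserver.crt rather than the in-node path the test uses:

	// certwindow.go: sketch of inspecting the NotAfter window that
	// --cert-expiration controls, assuming a local copy of apiserver.crt.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		left := time.Until(cert.NotAfter)
		fmt.Printf("certificate expires in %s\n", left)
		fmt.Println("short-lived (under 3m)?", left < 3*time.Minute)
	}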

                                                
                                    
TestForceSystemdFlag (26.1s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-724155 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-724155 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.230716111s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-724155 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-724155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-724155
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-724155: (3.574720858s)
--- PASS: TestForceSystemdFlag (26.10s)
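
Note: the `cat /etc/crio/crio.conf.d/02-crio.conf` step exists so the test can assert that --force-systemd switched CRI-O to the systemd cgroup manager. The sketch below shows that string check against a local copy of the file; the exact key docker_test.go greps for is an assumption here, not taken from its source.

	// criocgroup.go: sketch of the assertion behind catting 02-crio.conf; the
	// key checked is an assumption rather than the test's literal pattern.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		conf, err := os.ReadFile("02-crio.conf") // assumed local copy of the node file
		if err != nil {
			panic(err)
		}
		if strings.Contains(string(conf), `cgroup_manager = "systemd"`) {
			fmt.Println("CRI-O is configured for the systemd cgroup manager")
		} else {
			fmt.Println("systemd cgroup manager not found in the config")
		}
	}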

                                                
                                    
TestKVMDriverInstallOrUpdate (3.23s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0211 02:45:17.134732   19028 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0211 02:45:17.134900   19028 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0211 02:45:17.163775   19028 install.go:62] docker-machine-driver-kvm2: exit status 1
W0211 02:45:17.164074   19028 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0211 02:45:17.164164   19028 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3133450656/001/docker-machine-driver-kvm2
I0211 02:45:17.403957   19028 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3133450656/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5494820 0x5494820 0x5494820 0x5494820 0x5494820 0x5494820 0x5494820] Decompressors:map[bz2:0xc00080bba8 gz:0xc00080bc30 tar:0xc00080bbe0 tar.bz2:0xc00080bbf0 tar.gz:0xc00080bc00 tar.xz:0xc00080bc10 tar.zst:0xc00080bc20 tbz2:0xc00080bbf0 tgz:0xc00080bc00 txz:0xc00080bc10 tzst:0xc00080bc20 xz:0xc00080bc38 zip:0xc00080bc40 zst:0xc00080bc50] Getters:map[file:0xc00144b330 http:0xc0005fad20 https:0xc0005fad70] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0211 02:45:17.404006   19028 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3133450656/001/docker-machine-driver-kvm2
I0211 02:45:18.899498   19028 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0211 02:45:18.899584   19028 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0211 02:45:18.927024   19028 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0211 02:45:18.927064   19028 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0211 02:45:18.927159   19028 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0211 02:45:18.927193   19028 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3133450656/002/docker-machine-driver-kvm2
I0211 02:45:19.087531   19028 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3133450656/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5494820 0x5494820 0x5494820 0x5494820 0x5494820 0x5494820 0x5494820] Decompressors:map[bz2:0xc00080bba8 gz:0xc00080bc30 tar:0xc00080bbe0 tar.bz2:0xc00080bbf0 tar.gz:0xc00080bc00 tar.xz:0xc00080bc10 tar.zst:0xc00080bc20 tbz2:0xc00080bbf0 tgz:0xc00080bc00 txz:0xc00080bc10 tzst:0xc00080bc20 xz:0xc00080bc38 zip:0xc00080bc40 zst:0xc00080bc50] Getters:map[file:0xc00144ac10 http:0xc0008b4e60 https:0xc0008b4eb0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0211 02:45:19.087572   19028 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3133450656/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.23s)
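
Note: the two warnings above show the driver-install fallback in action: the arch-suffixed release asset's checksum file returns 404, so the code retries the unsuffixed "common" URL. A simplified sketch of that try-arch-then-common flow follows; the real implementation goes through go-getter with checksum verification, both omitted here.

	// driverfallback.go: simplified sketch of the arch-specific-then-common
	// download fallback recorded above (no checksum handling).
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func fetch(url, dst string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("bad response code: %d", resp.StatusCode)
		}
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, resp.Body)
		return err
	}

	func main() {
		base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
		if err := fetch(base+"-amd64", "docker-machine-driver-kvm2"); err != nil {
			fmt.Println("arch-specific download failed:", err, "- trying the common version")
			if err := fetch(base, "docker-machine-driver-kvm2"); err != nil {
				panic(err)
			}
		}
		fmt.Println("driver downloaded")
	}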

                                                
                                    
TestErrorSpam/setup (20.6s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-333851 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-333851 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-333851 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-333851 --driver=docker  --container-runtime=crio: (20.602421801s)
--- PASS: TestErrorSpam/setup (20.60s)

                                                
                                    
TestErrorSpam/start (0.58s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-333851 --log_dir /tmp/nospam-333851 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-333851 --log_dir /tmp/nospam-333851 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-333851 --log_dir /tmp/nospam-333851 start --dry-run
--- PASS: TestErrorSpam/start (0.58s)

                                                
                                    
TestErrorSpam/status (0.87s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-333851 --log_dir /tmp/nospam-333851 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-333851 --log_dir /tmp/nospam-333851 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-333851 --log_dir /tmp/nospam-333851 status
--- PASS: TestErrorSpam/status (0.87s)

                                                
                                    
TestErrorSpam/pause (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-333851 --log_dir /tmp/nospam-333851 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-333851 --log_dir /tmp/nospam-333851 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-333851 --log_dir /tmp/nospam-333851 pause
--- PASS: TestErrorSpam/pause (1.49s)

                                                
                                    
TestErrorSpam/unpause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-333851 --log_dir /tmp/nospam-333851 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-333851 --log_dir /tmp/nospam-333851 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-333851 --log_dir /tmp/nospam-333851 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

                                                
                                    
TestErrorSpam/stop (1.35s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-333851 --log_dir /tmp/nospam-333851 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-333851 --log_dir /tmp/nospam-333851 stop: (1.175622916s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-333851 --log_dir /tmp/nospam-333851 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-333851 --log_dir /tmp/nospam-333851 stop
--- PASS: TestErrorSpam/stop (1.35s)
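
Note: the TestErrorSpam subtests above rerun each subcommand against the nospam-333851 profile and fail if unexpected warning or error lines show up in the output. A bare-bones sketch of that kind of scan follows; error_spam_test.go additionally maintains allow-lists of expected messages, which are omitted here.

	// spamscan.go: bare-bones sketch of the TestErrorSpam idea: run a subcommand
	// and flag output lines that look like errors or warnings (no allow-lists).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "nospam-333851",
			"--log_dir", "/tmp/nospam-333851", "status").CombinedOutput()
		if err != nil {
			fmt.Println("command failed:", err)
		}
		for _, line := range strings.Split(string(out), "\n") {
			l := strings.ToLower(line)
			if strings.Contains(l, "error") || strings.Contains(l, "warning") {
				fmt.Println("suspicious line:", line)
			}
		}
	}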

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20400-12240/.minikube/files/etc/test/nested/copy/19028/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (43.68s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-149709 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-149709 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (43.675510858s)
--- PASS: TestFunctional/serial/StartWithProxy (43.68s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (35.52s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0211 02:08:34.615891   19028 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-149709 --alsologtostderr -v=8
E0211 02:08:53.188999   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:08:53.195494   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:08:53.206898   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:08:53.228350   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:08:53.270543   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:08:53.352401   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:08:53.514073   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:08:53.835655   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:08:54.477021   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:08:55.758562   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:08:58.319980   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:09:03.441711   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-149709 --alsologtostderr -v=8: (35.521444831s)
functional_test.go:680: soft start took 35.524625796s for "functional-149709" cluster.
I0211 02:09:10.140094   19028 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (35.52s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-149709 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-149709 cache add registry.k8s.io/pause:3.3: (1.107987443s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-149709 /tmp/TestFunctionalserialCacheCmdcacheadd_local1713812611/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 cache add minikube-local-cache-test:functional-149709
E0211 02:09:13.683310   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 cache delete minikube-local-cache-test:functional-149709
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-149709
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-149709 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (263.847267ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
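
Note: the cache_reload steps above remove registry.k8s.io/pause:latest from inside the node, confirm `crictl inspecti` now fails, then run `cache reload` and confirm the image is back. The sketch below drives the same sequence through the minikube binary, using the paths and profile name from this run.

	// cachereload.go: the remove / verify-missing / reload / verify-present
	// sequence from this subtest, driven through the minikube binary.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		return err
	}

	func main() {
		const profile = "functional-149709"
		const image = "registry.k8s.io/pause:latest"
		_ = run("-p", profile, "ssh", "sudo crictl rmi "+image)
		if run("-p", profile, "ssh", "sudo crictl inspecti "+image) == nil {
			fmt.Println("unexpected: image still present after rmi")
		}
		_ = run("-p", profile, "cache", "reload")
		if err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err != nil {
			fmt.Println("image still missing after cache reload:", err)
		}
	}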

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 kubectl -- --context functional-149709 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-149709 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.24s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-149709 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0211 02:09:34.165350   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-149709 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.238943793s)
functional_test.go:778: restart took 39.239058091s for "functional-149709" cluster.
I0211 02:09:56.240980   19028 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (39.24s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-149709 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
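
Note: ComponentHealth parses the JSON returned by `kubectl get po -l tier=control-plane -n kube-system -o=json` and checks each control-plane pod's phase and Ready condition, which is what the phase/status lines above record. A minimal sketch of that decoding, shelling out to the same kubectl command:

	// componenthealth.go: minimal decoding of the control-plane pod list that
	// `kubectl get po -l tier=control-plane -n kube-system -o=json` returns.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-149709",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			panic(err)
		}
		var pl podList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}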

                                                
                                    
TestFunctional/serial/LogsCmd (1.34s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-149709 logs: (1.341473558s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 logs --file /tmp/TestFunctionalserialLogsFileCmd4091121400/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-149709 logs --file /tmp/TestFunctionalserialLogsFileCmd4091121400/001/logs.txt: (1.36842354s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                    
TestFunctional/serial/InvalidService (4.08s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-149709 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-149709
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-149709: exit status 115 (318.447904ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30164 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-149709 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.08s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-149709 config get cpus: exit status 14 (91.384947ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-149709 config get cpus: exit status 14 (62.102943ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-149709 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-149709 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 55493: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.84s)

                                                
                                    
TestFunctional/parallel/DryRun (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-149709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-149709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (172.226431ms)

                                                
                                                
-- stdout --
	* [functional-149709] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20400
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0211 02:10:05.142600   54190 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:10:05.142703   54190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:10:05.142714   54190 out.go:358] Setting ErrFile to fd 2...
	I0211 02:10:05.142720   54190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:10:05.142952   54190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
	I0211 02:10:05.143691   54190 out.go:352] Setting JSON to false
	I0211 02:10:05.144964   54190 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3154,"bootTime":1739236651,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 02:10:05.145065   54190 start.go:139] virtualization: kvm guest
	I0211 02:10:05.147364   54190 out.go:177] * [functional-149709] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 02:10:05.148873   54190 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 02:10:05.148892   54190 notify.go:220] Checking for updates...
	I0211 02:10:05.150509   54190 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 02:10:05.152024   54190 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	I0211 02:10:05.153533   54190 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	I0211 02:10:05.154830   54190 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 02:10:05.156124   54190 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 02:10:05.158127   54190 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:10:05.158857   54190 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:10:05.185588   54190 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0211 02:10:05.185681   54190 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:10:05.246466   54190 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:55 SystemTime:2025-02-11 02:10:05.230218105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:10:05.246571   54190 docker.go:318] overlay module found
	I0211 02:10:05.249994   54190 out.go:177] * Using the docker driver based on existing profile
	I0211 02:10:05.251446   54190 start.go:297] selected driver: docker
	I0211 02:10:05.251476   54190 start.go:901] validating driver "docker" against &{Name:functional-149709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-149709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:10:05.251609   54190 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 02:10:05.254023   54190 out.go:201] 
	W0211 02:10:05.255224   54190 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0211 02:10:05.256507   54190 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-149709 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)
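
Note: the dry-run failure above is minikube's pre-flight memory validation: the requested 250MB is below the usable minimum of 1800MB quoted in the error, so the start is rejected with exit status 23 before any node is touched. The toy restatement below only mirrors that comparison; the floor value is read off the error text, not taken from minikube's source.

	// memcheck.go: toy restatement of the pre-flight memory validation; the
	// 1800MB floor is taken from the error message above, not minikube's code.
	package main

	import "fmt"

	const minUsableMB = 1800

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250))  // rejected, as in the --memory 250MB dry run
		fmt.Println(validateMemory(4000)) // accepted, as in the earlier full start
	}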

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-149709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-149709 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (197.926904ms)

                                                
                                                
-- stdout --
	* [functional-149709] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20400
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0211 02:10:05.292298   54262 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:10:05.292512   54262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:10:05.292542   54262 out.go:358] Setting ErrFile to fd 2...
	I0211 02:10:05.292558   54262 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:10:05.292928   54262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
	I0211 02:10:05.293538   54262 out.go:352] Setting JSON to false
	I0211 02:10:05.294487   54262 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3154,"bootTime":1739236651,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 02:10:05.294612   54262 start.go:139] virtualization: kvm guest
	I0211 02:10:05.297046   54262 out.go:177] * [functional-149709] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0211 02:10:05.298626   54262 notify.go:220] Checking for updates...
	I0211 02:10:05.298648   54262 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 02:10:05.300253   54262 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 02:10:05.301510   54262 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	I0211 02:10:05.302800   54262 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	I0211 02:10:05.304035   54262 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 02:10:05.305313   54262 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 02:10:05.307212   54262 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:10:05.307858   54262 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:10:05.349332   54262 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0211 02:10:05.349478   54262 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:10:05.413532   54262 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:55 SystemTime:2025-02-11 02:10:05.40286377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:10:05.413681   54262 docker.go:318] overlay module found
	I0211 02:10:05.416347   54262 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0211 02:10:05.417751   54262 start.go:297] selected driver: docker
	I0211 02:10:05.417781   54262 start.go:901] validating driver "docker" against &{Name:functional-149709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-149709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:10:05.417911   54262 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 02:10:05.420732   54262 out.go:201] 
	W0211 02:10:05.422136   54262 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0211 02:10:05.423576   54262 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-149709 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-149709 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-bvxj9" [c3610708-8013-4aaf-8ebf-9ec28dc85f3a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-bvxj9" [c3610708-8013-4aaf-8ebf-9ec28dc85f3a] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003705999s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:31818
functional_test.go:1692: http://192.168.49.2:31818: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-bvxj9

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31818
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.70s)
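A minimal sketch of the flow this test exercises, reusing the deployment and service names from the log; the final curl is an illustrative stand-in for the test's Go HTTP client:

    # deploy the echo server and expose it as a NodePort service
    kubectl --context functional-149709 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-149709 expose deployment hello-node-connect --type=NodePort --port=8080
    # resolve the node URL and fetch it (curl here is an assumption, not what the test runs)
    URL=$(out/minikube-linux-amd64 -p functional-149709 service hello-node-connect --url)
    curl -s "$URL"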

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 addons list
E0211 02:10:15.127689   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.62s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh -n functional-149709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 cp functional-149709:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4279706360/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh -n functional-149709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh -n functional-149709 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.13s)
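The copy operations above amount to the following; a rough sketch (the host-side destination path is arbitrary and chosen here only for illustration):

    # host -> node
    out/minikube-linux-amd64 -p functional-149709 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # node -> host (destination path is illustrative)
    out/minikube-linux-amd64 -p functional-149709 cp functional-149709:/home/docker/cp-test.txt /tmp/cp-test.txt
    # verify the file landed inside the node
    out/minikube-linux-amd64 -p functional-149709 ssh -n functional-149709 "sudo cat /home/docker/cp-test.txt"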

                                                
                                    
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/19028/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "sudo cat /etc/test/nested/copy/19028/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)
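How the synced file gets there is not shown in this excerpt; assuming the usual convention that files placed under $MINIKUBE_HOME/files/ are copied into the node at the same path when the profile starts, a rough sketch would be:

    # assumption: ~/.minikube/files/<path> is synced to <path> inside the node at profile start
    mkdir -p ~/.minikube/files/etc/test/nested/copy/19028
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/19028/hosts
    # after (re)starting the profile, the file is visible in the node
    out/minikube-linux-amd64 -p functional-149709 ssh "sudo cat /etc/test/nested/copy/19028/hosts"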

                                                
                                    
TestFunctional/parallel/CertSync (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/19028.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "sudo cat /etc/ssl/certs/19028.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/19028.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "sudo cat /usr/share/ca-certificates/19028.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/190282.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "sudo cat /etc/ssl/certs/190282.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/190282.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "sudo cat /usr/share/ca-certificates/190282.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.62s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-149709 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-149709 ssh "sudo systemctl is-active docker": exit status 1 (306.189213ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-149709 ssh "sudo systemctl is-active containerd": exit status 1 (302.308864ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)
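The check above is just systemctl's exit-code convention: on a cri-o profile the other runtimes should report "inactive" and exit with status 3, which is exactly what the non-zero exits in the log reflect.

    # both are expected to print "inactive" and exit 3 when cri-o is the active runtime
    out/minikube-linux-amd64 -p functional-149709 ssh "sudo systemctl is-active docker"
    out/minikube-linux-amd64 -p functional-149709 ssh "sudo systemctl is-active containerd"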

                                                
                                    
TestFunctional/parallel/License (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-149709 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-149709 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-fn66v" [33e369db-424f-4fab-998e-b4c973c4a077] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-fn66v" [33e369db-424f-4fab-998e-b4c973c4a077] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003991628s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "423.636864ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "66.988837ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-149709 /tmp/TestFunctionalparallelMountCmdany-port642484264/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1739239804325236574" to /tmp/TestFunctionalparallelMountCmdany-port642484264/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1739239804325236574" to /tmp/TestFunctionalparallelMountCmdany-port642484264/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1739239804325236574" to /tmp/TestFunctionalparallelMountCmdany-port642484264/001/test-1739239804325236574
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-149709 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (402.437396ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0211 02:10:04.727995   19028 retry.go:31] will retry after 652.035836ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 11 02:10 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 11 02:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 11 02:10 test-1739239804325236574
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh cat /mount-9p/test-1739239804325236574
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-149709 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5bcc0070-70ee-4353-99aa-42c18aa1a6ec] Pending
helpers_test.go:344: "busybox-mount" [5bcc0070-70ee-4353-99aa-42c18aa1a6ec] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5bcc0070-70ee-4353-99aa-42c18aa1a6ec] Running
helpers_test.go:344: "busybox-mount" [5bcc0070-70ee-4353-99aa-42c18aa1a6ec] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5bcc0070-70ee-4353-99aa-42c18aa1a6ec] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004651333s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-149709 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-149709 /tmp/TestFunctionalparallelMountCmdany-port642484264/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.21s)
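Outside the test harness the same 9p mount can be driven by hand; a sketch with a hypothetical host directory (/tmp/demo-mount) standing in for the test's temporary directory:

    # keep the mount helper running in the background (or a second terminal)
    out/minikube-linux-amd64 mount -p functional-149709 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1 &
    # confirm the 9p mount is visible inside the node, then inspect it
    out/minikube-linux-amd64 -p functional-149709 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-149709 ssh -- ls -la /mount-9p
    # clean up
    out/minikube-linux-amd64 -p functional-149709 ssh "sudo umount -f /mount-9p"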

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "596.71815ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "58.490149ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.66s)
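The profile-listing variants timed across the ProfileCmd subtests above, collected in one place (flags taken from the log lines):

    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 profile list -l
    out/minikube-linux-amd64 profile list -o json
    out/minikube-linux-amd64 profile list -o json --light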

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-149709 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-149709
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20241212-9f82dd49
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-149709 image ls --format short --alsologtostderr:
I0211 02:10:23.878620   59627 out.go:345] Setting OutFile to fd 1 ...
I0211 02:10:23.878736   59627 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:10:23.878746   59627 out.go:358] Setting ErrFile to fd 2...
I0211 02:10:23.878750   59627 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:10:23.878919   59627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
I0211 02:10:23.879478   59627 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:10:23.879575   59627 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:10:23.879927   59627 cli_runner.go:164] Run: docker container inspect functional-149709 --format={{.State.Status}}
I0211 02:10:23.897256   59627 ssh_runner.go:195] Run: systemctl --version
I0211 02:10:23.897302   59627 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-149709
I0211 02:10:23.914594   59627 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/functional-149709/id_rsa Username:docker}
I0211 02:10:24.004596   59627 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-149709 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | d300845f67aeb | 95.7MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 019ee182b58e2 | 90.8MB |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-scheduler          | v1.32.1            | 2b0d6572d062c | 70.6MB |
| localhost/my-image                      | functional-149709  | e8026f85238a0 | 1.47MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 95c0bda56fc4d | 98.1MB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e29f9c7391fd9 | 95.3MB |
| localhost/minikube-local-cache-test     | functional-149709  | af0985830212a | 3.33kB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-149709 image ls --format table --alsologtostderr:
I0211 02:10:26.555051   60214 out.go:345] Setting OutFile to fd 1 ...
I0211 02:10:26.555166   60214 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:10:26.555175   60214 out.go:358] Setting ErrFile to fd 2...
I0211 02:10:26.555179   60214 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:10:26.555360   60214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
I0211 02:10:26.555936   60214 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:10:26.556035   60214 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:10:26.556419   60214 cli_runner.go:164] Run: docker container inspect functional-149709 --format={{.State.Status}}
I0211 02:10:26.573957   60214 ssh_runner.go:195] Run: systemctl --version
I0211 02:10:26.574009   60214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-149709
I0211 02:10:26.590978   60214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/functional-149709/id_rsa Username:docker}
I0211 02:10:26.680407   60214 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-149709 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"af0985830212ad2c284fd7450f549059b1705793e7f0c0e244fe09e36686ec03","repoDigests":["localhost/minikube-local-cache-test@sha256:37a304168f1d5eeb9af927190198e51fce89051713e403c500a6eb34eb8ea9df"],"repoTags":["localhost/minikube-local-cache-test:functional-149709"],"size":"3330"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e","registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"7064
9158"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26","docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"95714353"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261
103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"8eed33b8f36969de0523170df1c56625ea5e9dd00e460cb04ba4787249949976","repoDigests":["docker.io/library/5f610a3a002afeab273a6f81195b9687f9beec4f028549e7af0b1bd0b45a6ad4-tmp@sha256:95c225ef70283c071f1299f84a3ea6a12e3b71d45d5fac76fc56d7b6e6ec1908"],"repoTags":[],"size":"1465608"},{"id":"e8026f85238a09794744d782f93782d1890a4533601bde61f82b000f64d4a23a","repoDigests":["localhost/my-image@sha256:e88837f93627353e0376ce9fcac9a272f210daec799a18cf1de21619a88cd398"],"repoTags":["localhost/my-image:functional-149709"],"size":"1468192"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@
sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c19136
1723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v202
41108-5c6d2daf"],"size":"94963761"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"98051552"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954","registry.k8s.io/kub
e-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"90793286"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"95271321"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-149709 image ls --format json --alsologtostderr:
I0211 02:10:26.347281   60163 out.go:345] Setting OutFile to fd 1 ...
I0211 02:10:26.347391   60163 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:10:26.347400   60163 out.go:358] Setting ErrFile to fd 2...
I0211 02:10:26.347405   60163 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:10:26.347598   60163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
I0211 02:10:26.348236   60163 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:10:26.348344   60163 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:10:26.348716   60163 cli_runner.go:164] Run: docker container inspect functional-149709 --format={{.State.Status}}
I0211 02:10:26.367163   60163 ssh_runner.go:195] Run: systemctl --version
I0211 02:10:26.367219   60163 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-149709
I0211 02:10:26.385382   60163 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/functional-149709/id_rsa Username:docker}
I0211 02:10:26.472393   60163 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-149709 image ls --format yaml --alsologtostderr:
- id: e8026f85238a09794744d782f93782d1890a4533601bde61f82b000f64d4a23a
repoDigests:
- localhost/my-image@sha256:e88837f93627353e0376ce9fcac9a272f210daec799a18cf1de21619a88cd398
repoTags:
- localhost/my-image:functional-149709
size: "1468192"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
- registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "70649158"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 8eed33b8f36969de0523170df1c56625ea5e9dd00e460cb04ba4787249949976
repoDigests:
- docker.io/library/5f610a3a002afeab273a6f81195b9687f9beec4f028549e7af0b1bd0b45a6ad4-tmp@sha256:95c225ef70283c071f1299f84a3ea6a12e3b71d45d5fac76fc56d7b6e6ec1908
repoTags: []
size: "1465608"
- id: af0985830212ad2c284fd7450f549059b1705793e7f0c0e244fe09e36686ec03
repoDigests:
- localhost/minikube-local-cache-test@sha256:37a304168f1d5eeb9af927190198e51fce89051713e403c500a6eb34eb8ea9df
repoTags:
- localhost/minikube-local-cache-test:functional-149709
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
- registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "90793286"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
- docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "95714353"
- id: beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1462480"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "95271321"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "98051552"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-149709 image ls --format yaml --alsologtostderr:
I0211 02:10:26.133571   60112 out.go:345] Setting OutFile to fd 1 ...
I0211 02:10:26.133713   60112 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:10:26.133724   60112 out.go:358] Setting ErrFile to fd 2...
I0211 02:10:26.133728   60112 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:10:26.133915   60112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
I0211 02:10:26.134540   60112 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:10:26.134657   60112 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:10:26.135042   60112 cli_runner.go:164] Run: docker container inspect functional-149709 --format={{.State.Status}}
I0211 02:10:26.153208   60112 ssh_runner.go:195] Run: systemctl --version
I0211 02:10:26.153269   60112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-149709
I0211 02:10:26.169946   60112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/functional-149709/id_rsa Username:docker}
I0211 02:10:26.260456   60112 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
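The four ImageList subtests differ only in the output-format flag; side by side (the --alsologtostderr flag in the log just adds the stderr traces shown above):

    out/minikube-linux-amd64 -p functional-149709 image ls --format short
    out/minikube-linux-amd64 -p functional-149709 image ls --format table
    out/minikube-linux-amd64 -p functional-149709 image ls --format json
    out/minikube-linux-amd64 -p functional-149709 image ls --format yaml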

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-149709 ssh pgrep buildkitd: exit status 1 (237.802955ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 image build -t localhost/my-image:functional-149709 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-149709 image build -t localhost/my-image:functional-149709 testdata/build --alsologtostderr: (1.591076607s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-149709 image build -t localhost/my-image:functional-149709 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8eed33b8f36
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-149709
--> e8026f85238
Successfully tagged localhost/my-image:functional-149709
e8026f85238a09794744d782f93782d1890a4533601bde61f82b000f64d4a23a
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-149709 image build -t localhost/my-image:functional-149709 testdata/build --alsologtostderr:
I0211 02:10:24.326671   59775 out.go:345] Setting OutFile to fd 1 ...
I0211 02:10:24.326846   59775 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:10:24.326856   59775 out.go:358] Setting ErrFile to fd 2...
I0211 02:10:24.326860   59775 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:10:24.327046   59775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
I0211 02:10:24.327602   59775 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:10:24.328194   59775 config.go:182] Loaded profile config "functional-149709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:10:24.328586   59775 cli_runner.go:164] Run: docker container inspect functional-149709 --format={{.State.Status}}
I0211 02:10:24.346766   59775 ssh_runner.go:195] Run: systemctl --version
I0211 02:10:24.346811   59775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-149709
I0211 02:10:24.364431   59775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/functional-149709/id_rsa Username:docker}
I0211 02:10:24.456474   59775 build_images.go:161] Building image from path: /tmp/build.3139059572.tar
I0211 02:10:24.456547   59775 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0211 02:10:24.464825   59775 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3139059572.tar
I0211 02:10:24.467978   59775 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3139059572.tar: stat -c "%s %y" /var/lib/minikube/build/build.3139059572.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3139059572.tar': No such file or directory
I0211 02:10:24.468007   59775 ssh_runner.go:362] scp /tmp/build.3139059572.tar --> /var/lib/minikube/build/build.3139059572.tar (3072 bytes)
I0211 02:10:24.489959   59775 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3139059572
I0211 02:10:24.498867   59775 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3139059572 -xf /var/lib/minikube/build/build.3139059572.tar
I0211 02:10:24.507252   59775 crio.go:315] Building image: /var/lib/minikube/build/build.3139059572
I0211 02:10:24.507326   59775 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-149709 /var/lib/minikube/build/build.3139059572 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0211 02:10:25.851010   59775 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-149709 /var/lib/minikube/build/build.3139059572 --cgroup-manager=cgroupfs: (1.343657213s)
I0211 02:10:25.851076   59775 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3139059572
I0211 02:10:25.859886   59775 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3139059572.tar
I0211 02:10:25.868283   59775 build_images.go:217] Built localhost/my-image:functional-149709 from /tmp/build.3139059572.tar
I0211 02:10:25.868315   59775 build_images.go:133] succeeded building to: functional-149709
I0211 02:10:25.868320   59775 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.04s)
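A minimal sketch of the build-and-verify step above (testdata/build supplies the Dockerfile and content.txt seen in the STEP output; the grep is only an illustrative check):

    # build an image inside the node's CRI-O/podman storage from a local context
    out/minikube-linux-amd64 -p functional-149709 image build -t localhost/my-image:functional-149709 testdata/build --alsologtostderr
    # confirm the new tag shows up in the image list
    out/minikube-linux-amd64 -p functional-149709 image ls | grep my-image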

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 image rm kicbase/echo-server:functional-149709 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-149709 /tmp/TestFunctionalparallelMountCmdspecific-port2827270516/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-149709 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (287.864106ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0211 02:10:11.826561   19028 retry.go:31] will retry after 296.086449ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-149709 /tmp/TestFunctionalparallelMountCmdspecific-port2827270516/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-149709 ssh "sudo umount -f /mount-9p": exit status 1 (253.530742ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-149709 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-149709 /tmp/TestFunctionalparallelMountCmdspecific-port2827270516/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.61s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-149709 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3719405263/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-149709 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3719405263/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-149709 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3719405263/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-149709 ssh "findmnt -T" /mount1: exit status 1 (327.605617ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0211 02:10:13.477304   19028 retry.go:31] will retry after 597.418686ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-149709 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-149709 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3719405263/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-149709 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3719405263/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-149709 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3719405263/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.96s)
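Note: a rough shell equivalent of the cleanup check above (paths taken from this run; the test starts the three mounts as background daemons and then kills them all at once):
	# start three concurrent mounts of the same host directory
	for m in mount1 mount2 mount3; do
	  out/minikube-linux-amd64 mount -p functional-149709 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3719405263/001:/$m &
	done
	# each mount point should be visible inside the guest
	out/minikube-linux-amd64 -p functional-149709 ssh "findmnt -T" /mount1
	out/minikube-linux-amd64 -p functional-149709 ssh "findmnt -T" /mount2
	out/minikube-linux-amd64 -p functional-149709 ssh "findmnt -T" /mount3
	# kill every mount process belonging to the profile in one call
	out/minikube-linux-amd64 mount -p functional-149709 --kill=true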

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 service list -o json
2025/02/11 02:10:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1511: Took "505.70131ms" to run "out/minikube-linux-amd64 -p functional-149709 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)
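Note: the two listing forms exercised above, as they would be run by hand (profile name from this run):
	# human-readable listing
	out/minikube-linux-amd64 -p functional-149709 service list
	# same data as JSON, convenient for scripting
	out/minikube-linux-amd64 -p functional-149709 service list -o json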

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:30578
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-149709 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-149709 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-149709 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 57778: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-149709 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)
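Note: this subtest simply starts a second tunnel for the same profile and then stops both; sketched from the logged commands (the test runs them as background daemons):
	out/minikube-linux-amd64 -p functional-149709 tunnel --alsologtostderr &
	out/minikube-linux-amd64 -p functional-149709 tunnel --alsologtostderr &
The "unable to kill pid" and "unable to find parent" messages above appear to be cleanup noise: one tunnel process had already exited by the time the helper tried to stop it, and the test still passes.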

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-149709 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:30578
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)
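Note: the three URL forms checked by the ServiceCmd subtests above, as plain commands (port 30578 is whatever NodePort this run happened to allocate):
	# HTTPS URL for the service
	out/minikube-linux-amd64 -p functional-149709 service --namespace=default --https --url hello-node
	# plain URL, and IP-only output via a Go template
	out/minikube-linux-amd64 -p functional-149709 service hello-node --url
	out/minikube-linux-amd64 -p functional-149709 service hello-node --url --format={{.IP}}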

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 update-context --alsologtostderr -v=2
E0211 02:11:37.050059   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-149709 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-149709 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
E0211 02:18:53.188947   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-149709
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-149709
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-149709
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (102.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-791178 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-791178 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m41.45189777s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (102.13s)
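Note: the cluster under test in this group is created with minikube's HA mode; the exact invocation, copied from the log (in this run --ha produced three control-plane nodes, ha-791178, -m02 and -m03, as the later status output shows):
	out/minikube-linux-amd64 start -p ha-791178 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p ha-791178 status -v=7 --alsologtostderr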

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (4.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-791178 -- rollout status deployment/busybox: (2.466955945s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- exec busybox-58667487b6-bxk2l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- exec busybox-58667487b6-fmcxk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- exec busybox-58667487b6-klpn4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- exec busybox-58667487b6-bxk2l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- exec busybox-58667487b6-fmcxk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- exec busybox-58667487b6-klpn4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- exec busybox-58667487b6-bxk2l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- exec busybox-58667487b6-fmcxk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- exec busybox-58667487b6-klpn4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.47s)
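Note: the DNS check above amounts to deploying the busybox test workload and resolving names from inside a pod; reconstructed from the logged commands (pod names are generated per run):
	out/minikube-linux-amd64 kubectl -p ha-791178 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	out/minikube-linux-amd64 kubectl -p ha-791178 -- rollout status deployment/busybox
	# resolve an external name and the in-cluster service name from one of the pods
	out/minikube-linux-amd64 kubectl -p ha-791178 -- exec busybox-58667487b6-bxk2l -- nslookup kubernetes.io
	out/minikube-linux-amd64 kubectl -p ha-791178 -- exec busybox-58667487b6-bxk2l -- nslookup kubernetes.default.svc.cluster.local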

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- exec busybox-58667487b6-bxk2l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- exec busybox-58667487b6-bxk2l -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- exec busybox-58667487b6-fmcxk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- exec busybox-58667487b6-fmcxk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- exec busybox-58667487b6-klpn4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-791178 -- exec busybox-58667487b6-klpn4 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.04s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (33.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-791178 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-791178 -v=7 --alsologtostderr: (32.315402462s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (33.15s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-791178 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (15.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp testdata/cp-test.txt ha-791178:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp ha-791178:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3535641814/001/cp-test_ha-791178.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp ha-791178:/home/docker/cp-test.txt ha-791178-m02:/home/docker/cp-test_ha-791178_ha-791178-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m02 "sudo cat /home/docker/cp-test_ha-791178_ha-791178-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp ha-791178:/home/docker/cp-test.txt ha-791178-m03:/home/docker/cp-test_ha-791178_ha-791178-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m03 "sudo cat /home/docker/cp-test_ha-791178_ha-791178-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp ha-791178:/home/docker/cp-test.txt ha-791178-m04:/home/docker/cp-test_ha-791178_ha-791178-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m04 "sudo cat /home/docker/cp-test_ha-791178_ha-791178-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp testdata/cp-test.txt ha-791178-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp ha-791178-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3535641814/001/cp-test_ha-791178-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp ha-791178-m02:/home/docker/cp-test.txt ha-791178:/home/docker/cp-test_ha-791178-m02_ha-791178.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178 "sudo cat /home/docker/cp-test_ha-791178-m02_ha-791178.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp ha-791178-m02:/home/docker/cp-test.txt ha-791178-m03:/home/docker/cp-test_ha-791178-m02_ha-791178-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m03 "sudo cat /home/docker/cp-test_ha-791178-m02_ha-791178-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp ha-791178-m02:/home/docker/cp-test.txt ha-791178-m04:/home/docker/cp-test_ha-791178-m02_ha-791178-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m04 "sudo cat /home/docker/cp-test_ha-791178-m02_ha-791178-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp testdata/cp-test.txt ha-791178-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp ha-791178-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3535641814/001/cp-test_ha-791178-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp ha-791178-m03:/home/docker/cp-test.txt ha-791178:/home/docker/cp-test_ha-791178-m03_ha-791178.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178 "sudo cat /home/docker/cp-test_ha-791178-m03_ha-791178.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp ha-791178-m03:/home/docker/cp-test.txt ha-791178-m02:/home/docker/cp-test_ha-791178-m03_ha-791178-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m02 "sudo cat /home/docker/cp-test_ha-791178-m03_ha-791178-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp ha-791178-m03:/home/docker/cp-test.txt ha-791178-m04:/home/docker/cp-test_ha-791178-m03_ha-791178-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m04 "sudo cat /home/docker/cp-test_ha-791178-m03_ha-791178-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp testdata/cp-test.txt ha-791178-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp ha-791178-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3535641814/001/cp-test_ha-791178-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp ha-791178-m04:/home/docker/cp-test.txt ha-791178:/home/docker/cp-test_ha-791178-m04_ha-791178.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178 "sudo cat /home/docker/cp-test_ha-791178-m04_ha-791178.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp ha-791178-m04:/home/docker/cp-test.txt ha-791178-m02:/home/docker/cp-test_ha-791178-m04_ha-791178-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m02 "sudo cat /home/docker/cp-test_ha-791178-m04_ha-791178-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 cp ha-791178-m04:/home/docker/cp-test.txt ha-791178-m03:/home/docker/cp-test_ha-791178-m04_ha-791178-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178-m03 "sudo cat /home/docker/cp-test_ha-791178-m04_ha-791178-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.86s)
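Note: every step above is a combination of cp and ssh; the pattern, with names from this run:
	# host -> node, then read the file back over SSH
	out/minikube-linux-amd64 -p ha-791178 cp testdata/cp-test.txt ha-791178:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-791178 ssh -n ha-791178 "sudo cat /home/docker/cp-test.txt"
	# node -> node copies use the same syntax with a node prefix on both sides
	out/minikube-linux-amd64 -p ha-791178 cp ha-791178:/home/docker/cp-test.txt ha-791178-m02:/home/docker/cp-test_ha-791178_ha-791178-m02.txt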

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-791178 node stop m02 -v=7 --alsologtostderr: (11.8279117s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-791178 status -v=7 --alsologtostderr: exit status 7 (651.197135ms)

                                                
                                                
-- stdout --
	ha-791178
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-791178-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-791178-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-791178-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0211 02:23:11.900765   86109 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:23:11.901063   86109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:23:11.901074   86109 out.go:358] Setting ErrFile to fd 2...
	I0211 02:23:11.901078   86109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:23:11.901274   86109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
	I0211 02:23:11.901427   86109 out.go:352] Setting JSON to false
	I0211 02:23:11.901455   86109 mustload.go:65] Loading cluster: ha-791178
	I0211 02:23:11.901561   86109 notify.go:220] Checking for updates...
	I0211 02:23:11.901859   86109 config.go:182] Loaded profile config "ha-791178": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:23:11.901879   86109 status.go:174] checking status of ha-791178 ...
	I0211 02:23:11.902270   86109 cli_runner.go:164] Run: docker container inspect ha-791178 --format={{.State.Status}}
	I0211 02:23:11.919874   86109 status.go:371] ha-791178 host status = "Running" (err=<nil>)
	I0211 02:23:11.919897   86109 host.go:66] Checking if "ha-791178" exists ...
	I0211 02:23:11.920169   86109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791178
	I0211 02:23:11.938182   86109 host.go:66] Checking if "ha-791178" exists ...
	I0211 02:23:11.938421   86109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0211 02:23:11.938458   86109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791178
	I0211 02:23:11.957784   86109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/ha-791178/id_rsa Username:docker}
	I0211 02:23:12.049346   86109 ssh_runner.go:195] Run: systemctl --version
	I0211 02:23:12.053475   86109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 02:23:12.065046   86109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:23:12.113563   86109 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:74 SystemTime:2025-02-11 02:23:12.103858774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:23:12.114319   86109 kubeconfig.go:125] found "ha-791178" server: "https://192.168.49.254:8443"
	I0211 02:23:12.114362   86109 api_server.go:166] Checking apiserver status ...
	I0211 02:23:12.114402   86109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 02:23:12.125041   86109 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1546/cgroup
	I0211 02:23:12.133687   86109 api_server.go:182] apiserver freezer: "12:freezer:/docker/28156a51ffe69d12192c1f64eca4a5c9609a952116886b42689cc787b16729d5/crio/crio-82e7b4f6121f1d417f790773b8d5d09e559aaecf875a7b7b926275e0f84e0f60"
	I0211 02:23:12.133750   86109 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/28156a51ffe69d12192c1f64eca4a5c9609a952116886b42689cc787b16729d5/crio/crio-82e7b4f6121f1d417f790773b8d5d09e559aaecf875a7b7b926275e0f84e0f60/freezer.state
	I0211 02:23:12.142069   86109 api_server.go:204] freezer state: "THAWED"
	I0211 02:23:12.142112   86109 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0211 02:23:12.146280   86109 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0211 02:23:12.146305   86109 status.go:463] ha-791178 apiserver status = Running (err=<nil>)
	I0211 02:23:12.146315   86109 status.go:176] ha-791178 status: &{Name:ha-791178 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0211 02:23:12.146336   86109 status.go:174] checking status of ha-791178-m02 ...
	I0211 02:23:12.146581   86109 cli_runner.go:164] Run: docker container inspect ha-791178-m02 --format={{.State.Status}}
	I0211 02:23:12.166432   86109 status.go:371] ha-791178-m02 host status = "Stopped" (err=<nil>)
	I0211 02:23:12.166453   86109 status.go:384] host is not running, skipping remaining checks
	I0211 02:23:12.166458   86109 status.go:176] ha-791178-m02 status: &{Name:ha-791178-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0211 02:23:12.166476   86109 status.go:174] checking status of ha-791178-m03 ...
	I0211 02:23:12.166697   86109 cli_runner.go:164] Run: docker container inspect ha-791178-m03 --format={{.State.Status}}
	I0211 02:23:12.184430   86109 status.go:371] ha-791178-m03 host status = "Running" (err=<nil>)
	I0211 02:23:12.184464   86109 host.go:66] Checking if "ha-791178-m03" exists ...
	I0211 02:23:12.184703   86109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791178-m03
	I0211 02:23:12.202711   86109 host.go:66] Checking if "ha-791178-m03" exists ...
	I0211 02:23:12.203125   86109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0211 02:23:12.203171   86109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791178-m03
	I0211 02:23:12.220379   86109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/ha-791178-m03/id_rsa Username:docker}
	I0211 02:23:12.308972   86109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 02:23:12.319652   86109 kubeconfig.go:125] found "ha-791178" server: "https://192.168.49.254:8443"
	I0211 02:23:12.319679   86109 api_server.go:166] Checking apiserver status ...
	I0211 02:23:12.319721   86109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 02:23:12.330261   86109 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1446/cgroup
	I0211 02:23:12.339479   86109 api_server.go:182] apiserver freezer: "12:freezer:/docker/78bfe3ba89b6dd5b4f0819e2a8e073448ac0e940849090a51535a9b38e49a2eb/crio/crio-f82b7691632b168225d16649866841b8a331971439fdabea123c1489a4e6571f"
	I0211 02:23:12.339546   86109 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/78bfe3ba89b6dd5b4f0819e2a8e073448ac0e940849090a51535a9b38e49a2eb/crio/crio-f82b7691632b168225d16649866841b8a331971439fdabea123c1489a4e6571f/freezer.state
	I0211 02:23:12.347525   86109 api_server.go:204] freezer state: "THAWED"
	I0211 02:23:12.347554   86109 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0211 02:23:12.352170   86109 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0211 02:23:12.352204   86109 status.go:463] ha-791178-m03 apiserver status = Running (err=<nil>)
	I0211 02:23:12.352219   86109 status.go:176] ha-791178-m03 status: &{Name:ha-791178-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0211 02:23:12.352251   86109 status.go:174] checking status of ha-791178-m04 ...
	I0211 02:23:12.352555   86109 cli_runner.go:164] Run: docker container inspect ha-791178-m04 --format={{.State.Status}}
	I0211 02:23:12.370385   86109 status.go:371] ha-791178-m04 host status = "Running" (err=<nil>)
	I0211 02:23:12.370407   86109 host.go:66] Checking if "ha-791178-m04" exists ...
	I0211 02:23:12.370655   86109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-791178-m04
	I0211 02:23:12.388461   86109 host.go:66] Checking if "ha-791178-m04" exists ...
	I0211 02:23:12.388699   86109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0211 02:23:12.388744   86109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-791178-m04
	I0211 02:23:12.405744   86109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/ha-791178-m04/id_rsa Username:docker}
	I0211 02:23:12.492888   86109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 02:23:12.503840   86109 status.go:176] ha-791178-m04 status: &{Name:ha-791178-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.48s)
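Note: the non-zero exit from status above is expected here: with m02 stopped, status reports the degraded node and exits 7. The verbose log also shows how the apiserver check works on the running nodes: locate the kube-apiserver process with pgrep, read its cgroup freezer state, then probe /healthz. As plain commands:
	out/minikube-linux-amd64 -p ha-791178 node stop m02 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-791178 status -v=7 --alsologtostderr
	echo $?   # 7 in this run, because ha-791178-m02 is reported as Stopped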

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (41.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 node start m02 -v=7 --alsologtostderr
E0211 02:23:53.188727   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-791178 node start m02 -v=7 --alsologtostderr: (40.762802407s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (41.63s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (140.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-791178 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-791178 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-791178 -v=7 --alsologtostderr: (36.549108257s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-791178 --wait=true -v=7 --alsologtostderr
E0211 02:25:03.309634   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:25:03.316048   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:25:03.327458   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:25:03.348923   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:25:03.390346   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:25:03.471838   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:25:03.633330   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:25:03.954869   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:25:04.597098   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:25:05.878685   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:25:08.440254   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:25:13.562490   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:25:16.253262   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:25:23.803972   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:25:44.285755   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-791178 --wait=true -v=7 --alsologtostderr: (1m43.824774443s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-791178
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (140.48s)
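Note: the restart round-trip above, reduced to its commands (copied from the log); the point of the test is that the node list before and after is identical:
	out/minikube-linux-amd64 node list -p ha-791178 -v=7 --alsologtostderr
	out/minikube-linux-amd64 stop -p ha-791178 -v=7 --alsologtostderr
	out/minikube-linux-amd64 start -p ha-791178 --wait=true -v=7 --alsologtostderr
	out/minikube-linux-amd64 node list -p ha-791178
The repeated cert_rotation "no such file or directory" errors in between appear to come from client watchers still referencing the earlier addons-652362 and functional-149709 profiles, whose certificates are gone; the test still passes.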

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 node delete m03 -v=7 --alsologtostderr
E0211 02:26:25.247344   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-791178 node delete m03 -v=7 --alsologtostderr: (10.560687797s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.32s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-791178 stop -v=7 --alsologtostderr: (35.540447668s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-791178 status -v=7 --alsologtostderr: exit status 7 (103.668874ms)

                                                
                                                
-- stdout --
	ha-791178
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-791178-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-791178-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0211 02:27:03.722723  103047 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:27:03.722994  103047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:27:03.723005  103047 out.go:358] Setting ErrFile to fd 2...
	I0211 02:27:03.723009  103047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:27:03.723188  103047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
	I0211 02:27:03.723357  103047 out.go:352] Setting JSON to false
	I0211 02:27:03.723386  103047 mustload.go:65] Loading cluster: ha-791178
	I0211 02:27:03.723495  103047 notify.go:220] Checking for updates...
	I0211 02:27:03.723793  103047 config.go:182] Loaded profile config "ha-791178": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:27:03.723810  103047 status.go:174] checking status of ha-791178 ...
	I0211 02:27:03.724246  103047 cli_runner.go:164] Run: docker container inspect ha-791178 --format={{.State.Status}}
	I0211 02:27:03.744715  103047 status.go:371] ha-791178 host status = "Stopped" (err=<nil>)
	I0211 02:27:03.744753  103047 status.go:384] host is not running, skipping remaining checks
	I0211 02:27:03.744760  103047 status.go:176] ha-791178 status: &{Name:ha-791178 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0211 02:27:03.744800  103047 status.go:174] checking status of ha-791178-m02 ...
	I0211 02:27:03.745143  103047 cli_runner.go:164] Run: docker container inspect ha-791178-m02 --format={{.State.Status}}
	I0211 02:27:03.762583  103047 status.go:371] ha-791178-m02 host status = "Stopped" (err=<nil>)
	I0211 02:27:03.762603  103047 status.go:384] host is not running, skipping remaining checks
	I0211 02:27:03.762609  103047 status.go:176] ha-791178-m02 status: &{Name:ha-791178-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0211 02:27:03.762630  103047 status.go:174] checking status of ha-791178-m04 ...
	I0211 02:27:03.762898  103047 cli_runner.go:164] Run: docker container inspect ha-791178-m04 --format={{.State.Status}}
	I0211 02:27:03.779840  103047 status.go:371] ha-791178-m04 host status = "Stopped" (err=<nil>)
	I0211 02:27:03.779876  103047 status.go:384] host is not running, skipping remaining checks
	I0211 02:27:03.779887  103047 status.go:176] ha-791178-m04 status: &{Name:ha-791178-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (66.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-791178 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0211 02:27:47.169345   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-791178 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m5.517328378s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (66.29s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (45.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-791178 --control-plane -v=7 --alsologtostderr
E0211 02:28:53.188200   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-791178 --control-plane -v=7 --alsologtostderr: (44.540607318s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-791178 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.37s)
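Note: worker and control-plane nodes are added with the same command; the only difference is the --control-plane flag. Both invocations are taken from this log (AddWorkerNode earlier, AddSecondaryNode here):
	# add a worker node
	out/minikube-linux-amd64 node add -p ha-791178 -v=7 --alsologtostderr
	# add another control-plane node to the HA cluster, then re-check status
	out/minikube-linux-amd64 node add -p ha-791178 --control-plane -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-791178 status -v=7 --alsologtostderr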

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

                                                
                                    
x
+
TestJSONOutput/start/Command (43.13s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-944958 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-944958 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (43.125932946s)
--- PASS: TestJSONOutput/start/Command (43.13s)
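Note: the JSONOutput tests drive the usual lifecycle commands with machine-readable output; the invocations, copied from this run:
	out/minikube-linux-amd64 start -p json-output-944958 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 pause -p json-output-944958 --output=json --user=testUser
	out/minikube-linux-amd64 unpause -p json-output-944958 --output=json --user=testUser
As the subtest names suggest, the parallel checks below verify properties of the emitted step events (distinct and increasing current-step numbers).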

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-944958 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.57s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-944958 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.76s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-944958 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-944958 --output=json --user=testUser: (5.762763315s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-470328 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-470328 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.201476ms)

-- stdout --
	{"specversion":"1.0","id":"2881e97a-52a8-4408-841e-fd58b243173f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-470328] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b10d9da6-70b7-4757-9501-983f15ad9111","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20400"}}
	{"specversion":"1.0","id":"c4693f4b-43e8-434a-9c71-b7a0ad738d11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2cfd1768-af3f-4e40-a758-28ddf028898e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig"}}
	{"specversion":"1.0","id":"632143a1-23e5-4a3e-b859-e6df9b53cd4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube"}}
	{"specversion":"1.0","id":"d63790aa-316d-43b4-af52-6aece6cc004e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c45546f3-51f9-4662-8083-d712aeb37670","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"19893327-c6ce-4891-bea7-b7b9273041cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-470328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-470328
--- PASS: TestErrorJSONOutput (0.21s)
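
Even this failed start emits structured events; the final io.k8s.sigs.minikube.error object above carries the reason and exit code. A sketch of pulling that field out (jq assumed available, profile name is a placeholder; note the start command itself exits 56, as shown above):

    $ out/minikube-linux-amd64 start -p json-error-demo --memory=2200 --output=json --wait=true --driver=fail \
        | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | "\(.data.name) (exit \(.data.exitcode)): \(.data.message)"'
    DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/amd64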

TestKicCustomNetwork/create_custom_network (28.43s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-770161 --network=
E0211 02:30:03.309184   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-770161 --network=: (26.323300039s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-770161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-770161
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-770161: (2.083094128s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.43s)

TestKicCustomNetwork/use_default_bridge_network (22.7s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-595383 --network=bridge
E0211 02:30:31.013930   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-595383 --network=bridge: (20.758749004s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-595383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-595383
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-595383: (1.92001588s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.70s)

TestKicExistingNetwork (22.35s)

=== RUN   TestKicExistingNetwork
I0211 02:30:50.126865   19028 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0211 02:30:50.144727   19028 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0211 02:30:50.144793   19028 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0211 02:30:50.144810   19028 cli_runner.go:164] Run: docker network inspect existing-network
W0211 02:30:50.161729   19028 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0211 02:30:50.161765   19028 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0211 02:30:50.161799   19028 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0211 02:30:50.161982   19028 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0211 02:30:50.178633   19028 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-370a375e9ac7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:3f:cb:fd:76} reservation:<nil>}
I0211 02:30:50.179181   19028 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015fc0}
I0211 02:30:50.179218   19028 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0211 02:30:50.179277   19028 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0211 02:30:50.239729   19028 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-748863 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-748863 --network=existing-network: (20.361935341s)
helpers_test.go:175: Cleaning up "existing-network-748863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-748863
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-748863: (1.840970345s)
I0211 02:31:12.460285   19028 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.35s)
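
The sequence above pre-creates the docker network and only then points minikube at it with --network. A simplified version of the same flow (the profile name is a placeholder, and the test's own docker network create additionally sets ip-masq/icc options and minikube labels):

    $ docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    $ out/minikube-linux-amd64 start -p existing-network-demo --network=existing-network
    $ docker network ls --format {{.Name}}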

TestKicCustomSubnet (26.39s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-670382 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-670382 --subnet=192.168.60.0/24: (24.334670298s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-670382 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-670382" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-670382
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-670382: (2.033626055s)
--- PASS: TestKicCustomSubnet (26.39s)
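
The subnet check boils down to starting with --subnet and reading the subnet back from the docker network that minikube creates under the profile name. A minimal sketch (placeholder profile name; the printed value should match the requested CIDR):

    $ out/minikube-linux-amd64 start -p subnet-demo --subnet=192.168.60.0/24
    $ docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"
    192.168.60.0/24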

TestKicStaticIP (24.1s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-046456 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-046456 --static-ip=192.168.200.200: (21.967674252s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-046456 ip
helpers_test.go:175: Cleaning up "static-ip-046456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-046456
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-046456: (2.003373656s)
--- PASS: TestKicStaticIP (24.10s)
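
The static IP case follows the same pattern: start with --static-ip and confirm with ip. A minimal sketch (placeholder profile name; the address has to be a free private IP, as in the test):

    $ out/minikube-linux-amd64 start -p static-ip-demo --static-ip=192.168.200.200
    $ out/minikube-linux-amd64 -p static-ip-demo ip
    192.168.200.200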

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (49.99s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-122806 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-122806 --driver=docker  --container-runtime=crio: (23.576492858s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-139695 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-139695 --driver=docker  --container-runtime=crio: (21.210941986s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-122806
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-139695
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-139695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-139695
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-139695: (1.86425299s)
helpers_test.go:175: Cleaning up "first-122806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-122806
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-122806: (2.195310455s)
--- PASS: TestMinikubeProfile (49.99s)
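
The profile assertions above are driven by profile list -ojson. A hedged sketch of listing just the valid profile names from that output (jq assumed available; the .valid[].Name fields are assumed from current minikube releases and may differ in other versions):

    $ out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'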

TestMountStart/serial/StartWithMountFirst (5.45s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-422841 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-422841 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.452562312s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.45s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-422841 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)
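
These two steps are the whole mount-start scenario: start a no-Kubernetes profile with a host mount, then list the mount point over ssh. A minimal sketch with a placeholder profile name:

    $ out/minikube-linux-amd64 start -p mount-demo --memory=2048 --mount --mount-gid 0 --mount-msize 6543 \
        --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=crio
    $ out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host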

TestMountStart/serial/StartWithMountSecond (8.18s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-435857 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-435857 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.175184847s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.18s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-435857 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-422841 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-422841 --alsologtostderr -v=5: (1.619353844s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-435857 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-435857
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-435857: (1.176020034s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (7.26s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-435857
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-435857: (6.259553841s)
--- PASS: TestMountStart/serial/RestartStopped (7.26s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-435857 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (69.61s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-911102 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0211 02:33:53.187967   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-911102 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m9.161999232s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.61s)
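
The fresh multi-node start is reproducible with the same flags; --nodes=2 brings up a control plane plus one worker, and status should then report both nodes. A minimal sketch (placeholder profile name):

    $ out/minikube-linux-amd64 start -p multinode-demo --nodes=2 --memory=2200 --wait=true --driver=docker --container-runtime=crio
    $ out/minikube-linux-amd64 -p multinode-demo status
    $ kubectl --context multinode-demo get nodes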

TestMultiNode/serial/DeployApp2Nodes (3.61s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911102 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911102 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-911102 -- rollout status deployment/busybox: (2.214658174s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911102 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911102 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911102 -- exec busybox-58667487b6-9qck2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911102 -- exec busybox-58667487b6-wj4jh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911102 -- exec busybox-58667487b6-9qck2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911102 -- exec busybox-58667487b6-wj4jh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911102 -- exec busybox-58667487b6-9qck2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911102 -- exec busybox-58667487b6-wj4jh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.61s)

TestMultiNode/serial/PingHostFrom2Pods (0.71s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911102 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911102 -- exec busybox-58667487b6-9qck2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911102 -- exec busybox-58667487b6-9qck2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911102 -- exec busybox-58667487b6-wj4jh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-911102 -- exec busybox-58667487b6-wj4jh -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)

TestMultiNode/serial/AddNode (27.37s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-911102 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-911102 -v 3 --alsologtostderr: (26.779803045s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.37s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-911102 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.61s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

TestMultiNode/serial/CopyFile (8.98s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 cp testdata/cp-test.txt multinode-911102:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 cp multinode-911102:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1805211606/001/cp-test_multinode-911102.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 cp multinode-911102:/home/docker/cp-test.txt multinode-911102-m02:/home/docker/cp-test_multinode-911102_multinode-911102-m02.txt
E0211 02:35:03.309075   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102-m02 "sudo cat /home/docker/cp-test_multinode-911102_multinode-911102-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 cp multinode-911102:/home/docker/cp-test.txt multinode-911102-m03:/home/docker/cp-test_multinode-911102_multinode-911102-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102-m03 "sudo cat /home/docker/cp-test_multinode-911102_multinode-911102-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 cp testdata/cp-test.txt multinode-911102-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 cp multinode-911102-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1805211606/001/cp-test_multinode-911102-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 cp multinode-911102-m02:/home/docker/cp-test.txt multinode-911102:/home/docker/cp-test_multinode-911102-m02_multinode-911102.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102 "sudo cat /home/docker/cp-test_multinode-911102-m02_multinode-911102.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 cp multinode-911102-m02:/home/docker/cp-test.txt multinode-911102-m03:/home/docker/cp-test_multinode-911102-m02_multinode-911102-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102-m03 "sudo cat /home/docker/cp-test_multinode-911102-m02_multinode-911102-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 cp testdata/cp-test.txt multinode-911102-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 cp multinode-911102-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1805211606/001/cp-test_multinode-911102-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 cp multinode-911102-m03:/home/docker/cp-test.txt multinode-911102:/home/docker/cp-test_multinode-911102-m03_multinode-911102.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102 "sudo cat /home/docker/cp-test_multinode-911102-m03_multinode-911102.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 cp multinode-911102-m03:/home/docker/cp-test.txt multinode-911102-m02:/home/docker/cp-test_multinode-911102-m03_multinode-911102-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102-m02 "sudo cat /home/docker/cp-test_multinode-911102-m03_multinode-911102-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.98s)
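
The copy matrix above exercises every direction of minikube cp (host to node, node to host, node to node) and verifies each copy with ssh -n plus sudo cat. The node-to-node case, condensed from the log:

    $ out/minikube-linux-amd64 -p multinode-911102 cp multinode-911102:/home/docker/cp-test.txt \
        multinode-911102-m02:/home/docker/cp-test_multinode-911102_multinode-911102-m02.txt
    $ out/minikube-linux-amd64 -p multinode-911102 ssh -n multinode-911102-m02 \
        "sudo cat /home/docker/cp-test_multinode-911102_multinode-911102-m02.txt"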

TestMultiNode/serial/StopNode (2.1s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-911102 node stop m03: (1.17600237s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-911102 status: exit status 7 (454.313297ms)

-- stdout --
	multinode-911102
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-911102-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-911102-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-911102 status --alsologtostderr: exit status 7 (465.28361ms)

-- stdout --
	multinode-911102
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-911102-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-911102-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0211 02:35:12.061906  168283 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:35:12.062165  168283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:35:12.062177  168283 out.go:358] Setting ErrFile to fd 2...
	I0211 02:35:12.062182  168283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:35:12.062446  168283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
	I0211 02:35:12.062652  168283 out.go:352] Setting JSON to false
	I0211 02:35:12.062739  168283 mustload.go:65] Loading cluster: multinode-911102
	I0211 02:35:12.062824  168283 notify.go:220] Checking for updates...
	I0211 02:35:12.063216  168283 config.go:182] Loaded profile config "multinode-911102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:35:12.063237  168283 status.go:174] checking status of multinode-911102 ...
	I0211 02:35:12.063659  168283 cli_runner.go:164] Run: docker container inspect multinode-911102 --format={{.State.Status}}
	I0211 02:35:12.083941  168283 status.go:371] multinode-911102 host status = "Running" (err=<nil>)
	I0211 02:35:12.083984  168283 host.go:66] Checking if "multinode-911102" exists ...
	I0211 02:35:12.084307  168283 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-911102
	I0211 02:35:12.101530  168283 host.go:66] Checking if "multinode-911102" exists ...
	I0211 02:35:12.101794  168283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0211 02:35:12.101832  168283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-911102
	I0211 02:35:12.120064  168283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/multinode-911102/id_rsa Username:docker}
	I0211 02:35:12.209065  168283 ssh_runner.go:195] Run: systemctl --version
	I0211 02:35:12.212983  168283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 02:35:12.223246  168283 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:35:12.271812  168283 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:64 SystemTime:2025-02-11 02:35:12.26226631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:35:12.272406  168283 kubeconfig.go:125] found "multinode-911102" server: "https://192.168.67.2:8443"
	I0211 02:35:12.272438  168283 api_server.go:166] Checking apiserver status ...
	I0211 02:35:12.272472  168283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 02:35:12.282633  168283 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1525/cgroup
	I0211 02:35:12.291686  168283 api_server.go:182] apiserver freezer: "12:freezer:/docker/63666490e7bad7ea29e1457d157660ae4fb924cf70702753752766574f1d257b/crio/crio-f986330b8e029f60ded5a76f9d547f5b3f336ce9337f7b120453da6d382913bf"
	I0211 02:35:12.291755  168283 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/63666490e7bad7ea29e1457d157660ae4fb924cf70702753752766574f1d257b/crio/crio-f986330b8e029f60ded5a76f9d547f5b3f336ce9337f7b120453da6d382913bf/freezer.state
	I0211 02:35:12.299367  168283 api_server.go:204] freezer state: "THAWED"
	I0211 02:35:12.299400  168283 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0211 02:35:12.304145  168283 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0211 02:35:12.304176  168283 status.go:463] multinode-911102 apiserver status = Running (err=<nil>)
	I0211 02:35:12.304187  168283 status.go:176] multinode-911102 status: &{Name:multinode-911102 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0211 02:35:12.304207  168283 status.go:174] checking status of multinode-911102-m02 ...
	I0211 02:35:12.304461  168283 cli_runner.go:164] Run: docker container inspect multinode-911102-m02 --format={{.State.Status}}
	I0211 02:35:12.321974  168283 status.go:371] multinode-911102-m02 host status = "Running" (err=<nil>)
	I0211 02:35:12.322004  168283 host.go:66] Checking if "multinode-911102-m02" exists ...
	I0211 02:35:12.322283  168283 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-911102-m02
	I0211 02:35:12.339703  168283 host.go:66] Checking if "multinode-911102-m02" exists ...
	I0211 02:35:12.340028  168283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0211 02:35:12.340070  168283 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-911102-m02
	I0211 02:35:12.358029  168283 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/20400-12240/.minikube/machines/multinode-911102-m02/id_rsa Username:docker}
	I0211 02:35:12.449373  168283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 02:35:12.460138  168283 status.go:176] multinode-911102-m02 status: &{Name:multinode-911102-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0211 02:35:12.460174  168283 status.go:174] checking status of multinode-911102-m03 ...
	I0211 02:35:12.460475  168283 cli_runner.go:164] Run: docker container inspect multinode-911102-m03 --format={{.State.Status}}
	I0211 02:35:12.477785  168283 status.go:371] multinode-911102-m03 host status = "Stopped" (err=<nil>)
	I0211 02:35:12.477809  168283 status.go:384] host is not running, skipping remaining checks
	I0211 02:35:12.477817  168283 status.go:176] multinode-911102-m03 status: &{Name:multinode-911102-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.10s)

TestMultiNode/serial/StartAfterStop (9.09s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-911102 node start m03 -v=7 --alsologtostderr: (8.434328619s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.09s)

TestMultiNode/serial/RestartKeepsNodes (105.48s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-911102
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-911102
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-911102: (24.65159597s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-911102 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-911102 --wait=true -v=8 --alsologtostderr: (1m20.73089919s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-911102
--- PASS: TestMultiNode/serial/RestartKeepsNodes (105.48s)
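
The restart check is: record the node list, stop the whole profile, start it again with --wait=true, and confirm the node list is unchanged. Condensed from the commands in the log:

    $ out/minikube-linux-amd64 node list -p multinode-911102
    $ out/minikube-linux-amd64 stop -p multinode-911102
    $ out/minikube-linux-amd64 start -p multinode-911102 --wait=true -v=8 --alsologtostderr
    $ out/minikube-linux-amd64 node list -p multinode-911102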

TestMultiNode/serial/DeleteNode (5.25s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-911102 node delete m03: (4.684744943s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.25s)

TestMultiNode/serial/StopMultiNode (23.73s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-911102 stop: (23.559694181s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-911102 status: exit status 7 (86.219357ms)

-- stdout --
	multinode-911102
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-911102-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-911102 status --alsologtostderr: exit status 7 (88.023743ms)

-- stdout --
	multinode-911102
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-911102-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0211 02:37:35.994331  177955 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:37:35.994464  177955 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:37:35.994475  177955 out.go:358] Setting ErrFile to fd 2...
	I0211 02:37:35.994482  177955 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:37:35.994713  177955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
	I0211 02:37:35.994883  177955 out.go:352] Setting JSON to false
	I0211 02:37:35.994911  177955 mustload.go:65] Loading cluster: multinode-911102
	I0211 02:37:35.995032  177955 notify.go:220] Checking for updates...
	I0211 02:37:35.995311  177955 config.go:182] Loaded profile config "multinode-911102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:37:35.995331  177955 status.go:174] checking status of multinode-911102 ...
	I0211 02:37:35.995745  177955 cli_runner.go:164] Run: docker container inspect multinode-911102 --format={{.State.Status}}
	I0211 02:37:36.017256  177955 status.go:371] multinode-911102 host status = "Stopped" (err=<nil>)
	I0211 02:37:36.017280  177955 status.go:384] host is not running, skipping remaining checks
	I0211 02:37:36.017288  177955 status.go:176] multinode-911102 status: &{Name:multinode-911102 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0211 02:37:36.017324  177955 status.go:174] checking status of multinode-911102-m02 ...
	I0211 02:37:36.017564  177955 cli_runner.go:164] Run: docker container inspect multinode-911102-m02 --format={{.State.Status}}
	I0211 02:37:36.035166  177955 status.go:371] multinode-911102-m02 host status = "Stopped" (err=<nil>)
	I0211 02:37:36.035190  177955 status.go:384] host is not running, skipping remaining checks
	I0211 02:37:36.035196  177955 status.go:176] multinode-911102-m02 status: &{Name:multinode-911102-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.73s)

TestMultiNode/serial/RestartMultiNode (45.09s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-911102 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-911102 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (44.52588215s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-911102 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (45.09s)

TestMultiNode/serial/ValidateNameConflict (26.28s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-911102
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-911102-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-911102-m02 --driver=docker  --container-runtime=crio: exit status 14 (68.033694ms)

-- stdout --
	* [multinode-911102-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20400
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-911102-m02' is duplicated with machine name 'multinode-911102-m02' in profile 'multinode-911102'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-911102-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-911102-m03 --driver=docker  --container-runtime=crio: (24.051407798s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-911102
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-911102: exit status 80 (263.981413ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-911102 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-911102-m03 already exists in multinode-911102-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-911102-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-911102-m03: (1.844556513s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.28s)
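
Both failure modes exercised above can be reproduced by hand with the same commands the test ran: a duplicate profile name is rejected with exit code 14 (MK_USAGE), and adding a node that collides with an existing standalone profile fails with exit code 80 (GUEST_NODE_ADD). The profile names below are the ones from this run:

	# exit 14: machine name multinode-911102-m02 already belongs to profile multinode-911102
	out/minikube-linux-amd64 start -p multinode-911102-m02 --driver=docker --container-runtime=crio
	# a non-conflicting name works
	out/minikube-linux-amd64 start -p multinode-911102-m03 --driver=docker --container-runtime=crio
	# exit 80: node add now collides with the standalone multinode-911102-m03 profile
	out/minikube-linux-amd64 node add -p multinode-911102
	out/minikube-linux-amd64 delete -p multinode-911102-m03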

                                                
                                    
x
+
TestPreload (103.02s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-301637 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0211 02:38:53.188615   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:40:03.309650   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-301637 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m17.194792491s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-301637 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-301637 image pull gcr.io/k8s-minikube/busybox: (1.298774818s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-301637
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-301637: (5.691098691s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-301637 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-301637 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (16.33712727s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-301637 image list
helpers_test.go:175: Cleaning up "test-preload-301637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-301637
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-301637: (2.285694502s)
--- PASS: TestPreload (103.02s)
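
The preload check boils down to: create a cluster with the preload tarball disabled on an older Kubernetes, pull an extra image, stop, restart on the default version, and confirm the image is still present in the container runtime. A hand-run sketch of that flow with the profile name from this run:

	out/minikube-linux-amd64 start -p test-preload-301637 --memory=2200 --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-301637 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-301637
	out/minikube-linux-amd64 start -p test-preload-301637 --memory=2200 --wait=true --driver=docker --container-runtime=crio
	# busybox should still be listed after the restart
	out/minikube-linux-amd64 -p test-preload-301637 image list
	out/minikube-linux-amd64 delete -p test-preload-301637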

                                                
                                    
x
+
TestScheduledStopUnix (96.31s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-934143 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-934143 --memory=2048 --driver=docker  --container-runtime=crio: (20.078235317s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-934143 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-934143 -n scheduled-stop-934143
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-934143 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0211 02:40:54.795699   19028 retry.go:31] will retry after 126.748µs: open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/scheduled-stop-934143/pid: no such file or directory
I0211 02:40:54.796867   19028 retry.go:31] will retry after 196.308µs: open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/scheduled-stop-934143/pid: no such file or directory
I0211 02:40:54.798019   19028 retry.go:31] will retry after 316.727µs: open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/scheduled-stop-934143/pid: no such file or directory
I0211 02:40:54.799161   19028 retry.go:31] will retry after 367.481µs: open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/scheduled-stop-934143/pid: no such file or directory
I0211 02:40:54.800285   19028 retry.go:31] will retry after 507.074µs: open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/scheduled-stop-934143/pid: no such file or directory
I0211 02:40:54.801427   19028 retry.go:31] will retry after 707.084µs: open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/scheduled-stop-934143/pid: no such file or directory
I0211 02:40:54.802563   19028 retry.go:31] will retry after 1.366031ms: open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/scheduled-stop-934143/pid: no such file or directory
I0211 02:40:54.804796   19028 retry.go:31] will retry after 1.896075ms: open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/scheduled-stop-934143/pid: no such file or directory
I0211 02:40:54.807018   19028 retry.go:31] will retry after 2.492533ms: open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/scheduled-stop-934143/pid: no such file or directory
I0211 02:40:54.810232   19028 retry.go:31] will retry after 2.087168ms: open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/scheduled-stop-934143/pid: no such file or directory
I0211 02:40:54.812371   19028 retry.go:31] will retry after 7.06718ms: open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/scheduled-stop-934143/pid: no such file or directory
I0211 02:40:54.819521   19028 retry.go:31] will retry after 5.418681ms: open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/scheduled-stop-934143/pid: no such file or directory
I0211 02:40:54.825749   19028 retry.go:31] will retry after 16.279922ms: open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/scheduled-stop-934143/pid: no such file or directory
I0211 02:40:54.843014   19028 retry.go:31] will retry after 22.81684ms: open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/scheduled-stop-934143/pid: no such file or directory
I0211 02:40:54.866252   19028 retry.go:31] will retry after 27.908872ms: open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/scheduled-stop-934143/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-934143 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-934143 -n scheduled-stop-934143
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-934143
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-934143 --schedule 15s
E0211 02:41:26.377081   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0211 02:41:56.256621   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-934143
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-934143: exit status 7 (67.606242ms)

                                                
                                                
-- stdout --
	scheduled-stop-934143
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-934143 -n scheduled-stop-934143
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-934143 -n scheduled-stop-934143: exit status 7 (68.27748ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-934143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-934143
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-934143: (4.924939397s)
--- PASS: TestScheduledStopUnix (96.31s)
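
The scheduled-stop sequence above is: schedule a stop far in the future, confirm TimeToStop is populated, cancel it, then schedule a short 15s stop and verify the host actually reaches Stopped (status exits with code 7 once the machine is down). A manual sketch with the profile name from this run:

	out/minikube-linux-amd64 start -p scheduled-stop-934143 --memory=2048 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 stop -p scheduled-stop-934143 --schedule 5m
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-934143
	out/minikube-linux-amd64 stop -p scheduled-stop-934143 --cancel-scheduled
	out/minikube-linux-amd64 stop -p scheduled-stop-934143 --schedule 15s
	# after the 15s window, status reports host: Stopped and exits 7
	out/minikube-linux-amd64 status -p scheduled-stop-934143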

                                                
                                    
x
+
TestInsufficientStorage (9.82s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-967386 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-967386 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.480834082s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6457a81f-80c1-4192-8965-2e3b7feebcb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-967386] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fbf77118-77bf-470e-9536-cdaa1857cc76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20400"}}
	{"specversion":"1.0","id":"ee6c8007-9caa-4ea9-879e-596b9de844ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"80d99f62-4573-4795-bc25-23920fbd49ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig"}}
	{"specversion":"1.0","id":"cced98f6-be4e-4dc5-8a5e-f01cc80857fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube"}}
	{"specversion":"1.0","id":"8c0f85bd-c9f7-42d3-8d69-a89f6bf7e33b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5c6c6e2f-86f6-46de-979b-48e61e920808","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7bf768bb-35bf-4c86-87b9-d9cfe427c11c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c16f5716-ffca-4764-b35d-ae222c08ff02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"fd9980c6-6452-4b5a-9488-9d08f998c01a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ee6af9ae-b567-4316-b346-c61deb63fe3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5e20b267-b03e-4b79-a590-eca1e281ab43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-967386\" primary control-plane node in \"insufficient-storage-967386\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f5994445-31d2-4ad2-bdcb-23eeb3aeda89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"178043af-2010-4ed6-a210-217c95537094","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c0d6a460-4b64-40bc-8c88-3c8016ce18f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-967386 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-967386 --output=json --layout=cluster: exit status 7 (255.905287ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-967386","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-967386","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0211 02:42:18.360716  200312 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-967386" does not appear in /home/jenkins/minikube-integration/20400-12240/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-967386 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-967386 --output=json --layout=cluster: exit status 7 (257.00496ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-967386","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-967386","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0211 02:42:18.617868  200411 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-967386" does not appear in /home/jenkins/minikube-integration/20400-12240/kubeconfig
	E0211 02:42:18.627777  200411 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/insufficient-storage-967386/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-967386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-967386
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-967386: (1.820532063s)
--- PASS: TestInsufficientStorage (9.82s)
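
The storage check drives start with two test-only knobs that are visible in the JSON events above (MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE) so that /var looks full; start then aborts with exit code 26 (RSRC_DOCKER_STORAGE) and status reports StatusCode 507. A sketch of driving this by hand, assuming those knobs are plain environment variables as the startup banner suggests:

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  out/minikube-linux-amd64 start -p insufficient-storage-967386 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio
	# expect "StatusName":"InsufficientStorage" (507) for the node
	out/minikube-linux-amd64 status -p insufficient-storage-967386 --output=json --layout=cluster
	out/minikube-linux-amd64 delete -p insufficient-storage-967386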

                                                
                                    
x
+
TestRunningBinaryUpgrade (99.85s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1382057949 start -p running-upgrade-857359 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1382057949 start -p running-upgrade-857359 --memory=2200 --vm-driver=docker  --container-runtime=crio: (28.180624851s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-857359 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-857359 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m8.753183617s)
helpers_test.go:175: Cleaning up "running-upgrade-857359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-857359
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-857359: (2.537053311s)
--- PASS: TestRunningBinaryUpgrade (99.85s)
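
The running-binary upgrade is: create a cluster with an old minikube release, then run start on the same profile with the binary under test while the cluster is still running. Manually, with the temporary copy of the old release this run happened to use:

	/tmp/minikube-v1.26.0.1382057949 start -p running-upgrade-857359 --memory=2200 --vm-driver=docker --container-runtime=crio
	# upgrade in place with the current binary while the cluster is running
	out/minikube-linux-amd64 start -p running-upgrade-857359 --memory=2200 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 delete -p running-upgrade-857359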

                                                
                                    
x
+
TestKubernetesUpgrade (358.12s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-504968 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-504968 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (55.277421338s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-504968
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-504968: (1.297019205s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-504968 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-504968 status --format={{.Host}}: exit status 7 (82.476889ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-504968 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-504968 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.295985338s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-504968 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-504968 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-504968 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (72.915087ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-504968] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20400
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-504968
	    minikube start -p kubernetes-upgrade-504968 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5049682 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-504968 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-504968 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-504968 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.484492304s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-504968" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-504968
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-504968: (2.545784159s)
--- PASS: TestKubernetesUpgrade (358.12s)
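
The upgrade path above is: bring the cluster up on v1.20.0, stop it, upgrade to v1.32.1, confirm a downgrade back to v1.20.0 is refused (exit code 106, K8S_DOWNGRADE_UNSUPPORTED), and verify a restart at v1.32.1 still works. By hand, with the same profile:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-504968 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-504968
	out/minikube-linux-amd64 start -p kubernetes-upgrade-504968 --memory=2200 --kubernetes-version=v1.32.1 --driver=docker --container-runtime=crio
	# refused with exit code 106: downgrading an existing cluster is not supported
	out/minikube-linux-amd64 start -p kubernetes-upgrade-504968 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
	# restarting at the already-installed version succeeds
	out/minikube-linux-amd64 start -p kubernetes-upgrade-504968 --memory=2200 --kubernetes-version=v1.32.1 --driver=docker --container-runtime=crio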

                                                
                                    
x
+
TestMissingContainerUpgrade (140.07s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.838977489 start -p missing-upgrade-323539 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.838977489 start -p missing-upgrade-323539 --memory=2200 --driver=docker  --container-runtime=crio: (1m10.160059711s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-323539
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-323539: (13.075295983s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-323539
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-323539 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0211 02:43:53.188482   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-323539 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.453157759s)
helpers_test.go:175: Cleaning up "missing-upgrade-323539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-323539
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-323539: (4.715580134s)
--- PASS: TestMissingContainerUpgrade (140.07s)
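
The missing-container case removes the node's Docker container out from under a cluster created by an old release and checks that the current binary recreates it on the next start. Manual sketch with the names from this run:

	/tmp/minikube-v1.26.0.838977489 start -p missing-upgrade-323539 --memory=2200 --driver=docker --container-runtime=crio
	# delete the node container behind minikube's back
	docker stop missing-upgrade-323539
	docker rm missing-upgrade-323539
	# the current binary should recreate the container and recover the profile
	out/minikube-linux-amd64 start -p missing-upgrade-323539 --memory=2200 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 delete -p missing-upgrade-323539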

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (94.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2559627935 start -p stopped-upgrade-624566 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2559627935 start -p stopped-upgrade-624566 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m5.242712625s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2559627935 -p stopped-upgrade-624566 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2559627935 -p stopped-upgrade-624566 stop: (3.583060657s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-624566 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-624566 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.150962012s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (94.98s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-624566
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.80s)

                                                
                                    
x
+
TestPause/serial/Start (47.08s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-346537 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-346537 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (47.079882217s)
--- PASS: TestPause/serial/Start (47.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-050042 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-050042 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (67.510541ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-050042] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20400
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
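
This step only exercises flag validation: combining --no-kubernetes with --kubernetes-version is rejected immediately with exit code 14 (MK_USAGE), before any machine is created. By hand:

	# rejected: the two flags contradict each other (exit code 14)
	out/minikube-linux-amd64 start -p NoKubernetes-050042 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio
	# if kubernetes-version was set as a global config value, the error suggests clearing it
	out/minikube-linux-amd64 config unset kubernetes-version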

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (25.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-050042 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-050042 --driver=docker  --container-runtime=crio: (24.815438768s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-050042 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.16s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (24.92s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-346537 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0211 02:45:03.309368   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-346537 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.904161101s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (24.92s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (6.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-050042 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-050042 --no-kubernetes --driver=docker  --container-runtime=crio: (3.700573677s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-050042 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-050042 status -o json: exit status 2 (312.610227ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-050042","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-050042
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-050042: (2.03687918s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.05s)
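
Re-running start with --no-kubernetes on an existing profile keeps the machine but shuts Kubernetes down, which is why the JSON status above shows the host Running with kubelet and apiserver Stopped and why status exits with code 2. A minimal sketch:

	out/minikube-linux-amd64 start -p NoKubernetes-050042 --no-kubernetes --driver=docker --container-runtime=crio
	# expect Host "Running" but Kubelet/APIServer "Stopped"; exit code 2
	out/minikube-linux-amd64 -p NoKubernetes-050042 status -o json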

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-065740 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-065740 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (166.928424ms)

                                                
                                                
-- stdout --
	* [false-065740] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20400
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0211 02:45:09.859483  237493 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:45:09.859625  237493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:45:09.859637  237493 out.go:358] Setting ErrFile to fd 2...
	I0211 02:45:09.859644  237493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:45:09.859923  237493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12240/.minikube/bin
	I0211 02:45:09.860605  237493 out.go:352] Setting JSON to false
	I0211 02:45:09.861755  237493 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5259,"bootTime":1739236651,"procs":284,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 02:45:09.861862  237493 start.go:139] virtualization: kvm guest
	I0211 02:45:09.864459  237493 out.go:177] * [false-065740] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 02:45:09.865796  237493 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 02:45:09.865786  237493 notify.go:220] Checking for updates...
	I0211 02:45:09.868445  237493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 02:45:09.869637  237493 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12240/kubeconfig
	I0211 02:45:09.870830  237493 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12240/.minikube
	I0211 02:45:09.872073  237493 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 02:45:09.873242  237493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 02:45:09.875174  237493 config.go:182] Loaded profile config "NoKubernetes-050042": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0211 02:45:09.875323  237493 config.go:182] Loaded profile config "kubernetes-upgrade-504968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:45:09.875511  237493 config.go:182] Loaded profile config "pause-346537": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:45:09.875626  237493 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:45:09.901565  237493 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0211 02:45:09.901671  237493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0211 02:45:09.953961  237493 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:75 SystemTime:2025-02-11 02:45:09.944030338 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0211 02:45:09.954069  237493 docker.go:318] overlay module found
	I0211 02:45:09.955877  237493 out.go:177] * Using the docker driver based on user configuration
	I0211 02:45:09.957223  237493 start.go:297] selected driver: docker
	I0211 02:45:09.957249  237493 start.go:901] validating driver "docker" against <nil>
	I0211 02:45:09.957264  237493 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 02:45:09.959680  237493 out.go:201] 
	W0211 02:45:09.961147  237493 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0211 02:45:09.962429  237493 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-065740 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-065740

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-065740

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-065740

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-065740

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-065740

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-065740

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-065740

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-065740

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-065740

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-065740

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-065740

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-065740" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-065740" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20400-12240/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 02:43:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-504968
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20400-12240/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 02:44:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-346537
contexts:
- context:
    cluster: kubernetes-upgrade-504968
    user: kubernetes-upgrade-504968
  name: kubernetes-upgrade-504968
- context:
    cluster: pause-346537
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 02:44:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-346537
  name: pause-346537
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-504968
  user:
    client-certificate: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/kubernetes-upgrade-504968/client.crt
    client-key: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/kubernetes-upgrade-504968/client.key
- name: pause-346537
  user:
    client-certificate: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/pause-346537/client.crt
    client-key: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/pause-346537/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-065740
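Editor's note: the "context was not found" error follows directly from the kubeconfig dumped above, which only contains the kubernetes-upgrade-504968 and pause-346537 contexts; any kubectl call pinned to the already-deleted false-065740 profile has nothing to resolve. A minimal sketch of the same behaviour (the exact flags the debug collector passes are an assumption; only the --context lookup matters):

	# list the contexts actually present in the kubeconfig
	kubectl config get-contexts

	# pinning kubectl to a context that is not listed fails the same way
	kubectl --context false-065740 get configmaps -A
	# Error in configuration: context was not found for specified context: false-065740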

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065740"

                                                
                                                
----------------------- debugLogs end: false-065740 [took: 3.429656623s] --------------------------------
helpers_test.go:175: Cleaning up "false-065740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-065740
--- PASS: TestNetworkPlugins/group/false (3.79s)

                                                
                                    
x
+
TestPause/serial/Pause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-346537 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-346537 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-346537 --output=json --layout=cluster: exit status 2 (339.47589ms)

                                                
                                                
-- stdout --
	{"Name":"pause-346537","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-346537","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
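Editor's note: the JSON above encodes component state as HTTP-style status codes (200 OK, 405 Stopped, 418 Paused), which is why a correctly paused cluster makes the status command exit 2 instead of 0. A hedged way to pull just those names out of the same output (jq is used purely for illustration and is not part of the test):

	out/minikube-linux-amd64 status -p pause-346537 --output=json --layout=cluster \
	  | jq '{cluster: .StatusName, components: .Nodes[0].Components | map_values(.StatusName)}'
	# {"cluster":"Paused","components":{"apiserver":"Paused","kubelet":"Stopped"}}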

                                                
                                    
x
+
TestPause/serial/Unpause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-346537 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.73s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-050042 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-050042 --no-kubernetes --driver=docker  --container-runtime=crio: (7.686517343s)
--- PASS: TestNoKubernetes/serial/Start (7.69s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.89s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-346537 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.78s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-346537 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-346537 --alsologtostderr -v=5: (2.783332737s)
--- PASS: TestPause/serial/DeletePaused (2.78s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (15.75s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.686382573s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-346537
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-346537: exit status 1 (20.508025ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-346537: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.75s)
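Editor's note: the non-zero exit from docker volume inspect is the expected outcome here; after delete -p, no container, volume or network for the profile should remain. A hedged manual equivalent of what the test verifies (the name filters are illustrative, not the test's exact invocation):

	docker ps -a --filter name=pause-346537        # no containers expected
	docker volume inspect pause-346537             # "no such volume" expected
	docker network ls --filter name=pause-346537   # no profile network expected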

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-050042 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-050042 "sudo systemctl is-active --quiet service kubelet": exit status 1 (251.705747ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
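Editor's note: exit status 3 from the ssh'd command is systemctl's conventional code for an inactive unit; systemctl is-active exits 0 only when the unit is running, so the non-zero exit is exactly what a --no-kubernetes node should produce. A hedged check without --quiet makes the state visible:

	out/minikube-linux-amd64 ssh -p NoKubernetes-050042 "sudo systemctl is-active kubelet"
	# prints "inactive" (or "failed") and exits non-zero when the kubelet is not running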

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (19.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.671024901s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (5.149493586s)
--- PASS: TestNoKubernetes/serial/ProfileList (19.82s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-050042
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-050042: (1.230667373s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-050042 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-050042 --driver=docker  --container-runtime=crio: (8.902239679s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.90s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-050042 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-050042 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.765607ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (142.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-817513 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-817513 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m22.231860748s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (142.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (54.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-496892 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-496892 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (54.155024153s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-496892 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [507909e2-0af0-4987-8b61-ac0899bd6ec9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [507909e2-0af0-4987-8b61-ac0899bd6ec9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003282917s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-496892 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.28s)
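Editor's note: the DeployApp step is the same across the StartStop groups below: create the busybox pod from testdata, wait for the integration-test=busybox selector to be Running, then exec a trivial command to prove the container is usable. A hedged manual equivalent (the test uses its own pod-wait helper rather than kubectl wait):

	kubectl --context no-preload-496892 create -f testdata/busybox.yaml
	kubectl --context no-preload-496892 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context no-preload-496892 exec busybox -- /bin/sh -c "ulimit -n"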

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-496892 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-496892 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)
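Editor's note: the --images/--registries pair rewrites where an addon pulls its image from; here metrics-server is deliberately pointed at fake.domain so the describe step can confirm the override landed rather than waiting for a real metrics-server rollout. A hedged way to see the override (the composed image path is an assumption about how minikube joins registry and image):

	kubectl --context no-preload-496892 -n kube-system describe deploy/metrics-server | grep -i image
	# expected to show the fake.domain-prefixed echoserver image rather than the stock metrics-server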

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (11.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-496892 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-496892 --alsologtostderr -v=3: (11.865072158s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.87s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-496892 -n no-preload-496892
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-496892 -n no-preload-496892: exit status 7 (74.79548ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-496892 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
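Editor's note: exit status 7 here is not a failure; minikube status reserves distinct non-zero exit codes for stopped states, so a deliberately stopped profile reporting "Stopped" with a non-zero exit is expected, which is why the test logs "(may be ok)" and then enables the dashboard addon against the stopped profile (the addon takes effect on the next start). A hedged reproduction:

	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-496892 -n no-preload-496892 || echo "exit=$?"
	out/minikube-linux-amd64 addons enable dashboard -p no-preload-496892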

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (285.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-496892 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-496892 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m45.117750886s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-496892 -n no-preload-496892
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (285.45s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (40.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-082831 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-082831 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (40.343161476s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-817513 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a22ef182-7dff-4206-9cbc-293f31857529] Pending
helpers_test.go:344: "busybox" [a22ef182-7dff-4206-9cbc-293f31857529] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a22ef182-7dff-4206-9cbc-293f31857529] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003224723s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-817513 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.43s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-817513 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-817513 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (11.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-817513 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-817513 --alsologtostderr -v=3: (11.973866447s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.97s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-817513 -n old-k8s-version-817513
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-817513 -n old-k8s-version-817513: exit status 7 (70.334215ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-817513 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (130.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-817513 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0211 02:48:53.188256   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-817513 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m10.062877923s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-817513 -n old-k8s-version-817513
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (130.40s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-082831 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e0c5b6d1-12f7-4a12-91a3-a84d3b0fb5bc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e0c5b6d1-12f7-4a12-91a3-a84d3b0fb5bc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00404666s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-082831 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-082831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-082831 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-082831 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-082831 --alsologtostderr -v=3: (12.051123029s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.05s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-082831 -n embed-certs-082831
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-082831 -n embed-certs-082831: exit status 7 (73.572297ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-082831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (263.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-082831 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-082831 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m23.274380106s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-082831 -n embed-certs-082831
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (263.58s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-289377 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
E0211 02:50:03.309734   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-289377 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (43.259238505s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-289377 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0cc33b7a-b4db-4acc-961a-e40c7fe308e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0cc33b7a-b4db-4acc-961a-e40c7fe308e1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003864966s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-289377 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-289377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-289377 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-289377 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-289377 --alsologtostderr -v=3: (11.900425311s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.90s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-289377 -n default-k8s-diff-port-289377
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-289377 -n default-k8s-diff-port-289377: exit status 7 (69.156946ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-289377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (270.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-289377 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-289377 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (4m30.642455368s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-289377 -n default-k8s-diff-port-289377
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (270.95s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-66pdg" [06f1508b-372e-44d4-8c12-57b34d5eeaed] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00306234s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-66pdg" [06f1508b-372e-44d4-8c12-57b34d5eeaed] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003286148s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-817513 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-817513 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-817513 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-817513 -n old-k8s-version-817513
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-817513 -n old-k8s-version-817513: exit status 2 (290.727221ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-817513 -n old-k8s-version-817513
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-817513 -n old-k8s-version-817513: exit status 2 (294.204502ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-817513 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-817513 -n old-k8s-version-817513
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-817513 -n old-k8s-version-817513
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.55s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (26.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-077456 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-077456 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (26.631069072s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.63s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-077456 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-077456 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.075323147s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)
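Editor's note: the "cni mode requires additional setup" warning explains why this group skips DeployApp and the later app/addon-exists checks; with --network-plugin=cni and only a pod CIDR pushed through --extra-config=kubeadm.pod-network-cidr, no CNI plugin is installed, so ordinary pods typically cannot schedule until a network add-on is applied. A hedged sketch of the extra step the test intentionally omits (the manifest is whatever CNI you choose, not something the test applies):

	out/minikube-linux-amd64 start -p newest-cni-077456 --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --container-runtime=crio
	kubectl --context newest-cni-077456 apply -f <your-cni-manifest.yaml>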

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-077456 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-077456 --alsologtostderr -v=3: (1.209821103s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-077456 -n newest-cni-077456
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-077456 -n newest-cni-077456: exit status 7 (66.916968ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-077456 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (12.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-077456 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-077456 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.1: (12.576160422s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-077456 -n newest-cni-077456
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.89s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-077456 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-077456 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-077456 --alsologtostderr -v=1: (1.109050419s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-077456 -n newest-cni-077456
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-077456 -n newest-cni-077456: exit status 2 (302.821495ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-077456 -n newest-cni-077456
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-077456 -n newest-cni-077456: exit status 2 (291.273112ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-077456 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-077456 -n newest-cni-077456
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-077456 -n newest-cni-077456
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (42.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-065740 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-065740 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.074758308s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-9mlxt" [9b78db3c-9b1d-4fc6-bae6-e2ccf1accb73] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003707493s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-9mlxt" [9b78db3c-9b1d-4fc6-bae6-e2ccf1accb73] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003955078s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-496892 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-496892 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)
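Editor's note: VerifyKubernetesImages lists the images present in the profile and reports any that are not part of the expected Kubernetes image set; the two "Found non-minikube image" lines above are informational, not failures. A rough sketch of that kind of scan (the one-image-per-line output format and the allow-list prefixes below are illustrative assumptions, not the test's real logic):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Assumes the default output prints one image reference per line.
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "no-preload-496892", "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	// Illustrative allow-list; the real test compares against the expected
	// image set for the Kubernetes version under test.
	expected := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"}
	for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		known := false
		for _, p := range expected {
			if strings.HasPrefix(img, p) {
				known = true
				break
			}
		}
		if !known {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}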

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-496892 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-496892 -n no-preload-496892
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-496892 -n no-preload-496892: exit status 2 (291.536349ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-496892 -n no-preload-496892
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-496892 -n no-preload-496892: exit status 2 (296.362985ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-496892 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-496892 -n no-preload-496892
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-496892 -n no-preload-496892
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.67s)
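Editor's note: the Pause subtest pauses the profile, confirms via `minikube status` that the API server reports Paused and the kubelet reports Stopped (exit status 2 is expected while components are down, hence "may be ok" above), then unpauses. A hedged Go sketch of that status check (profile name and Go-template fields taken from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// statusField runs `minikube status` with a Go template for a single field;
// a non-zero exit (status 2 in the log) is expected while the cluster is paused.
func statusField(profile, field string) (string, int) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	profile := "no-preload-496892"
	exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run()

	api, code := statusField(profile, "APIServer")
	fmt.Printf("APIServer=%s (exit %d)\n", api, code) // expect "Paused", exit status 2
	kubelet, code := statusField(profile, "Kubelet")
	fmt.Printf("Kubelet=%s (exit %d)\n", kubelet, code) // expect "Stopped", exit status 2

	exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run()
}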

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (46.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-065740 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-065740 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (46.486512395s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-065740 "pgrep -a kubelet"
I0211 02:52:41.520276   19028 config.go:182] Loaded profile config "auto-065740": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-065740 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-tbwgg" [f1beb7d6-508a-4f61-8e89-fff93e18bb06] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-tbwgg" [f1beb7d6-508a-4f61-8e89-fff93e18bb06] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004267818s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.22s)
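Editor's note: the NetCatPod step recreates a small netcat deployment from testdata and then polls until a pod with the app=netcat label is Running and Ready. Using kubectl directly, the same wait can be approximated as below (context name and label from the log; `kubectl wait` stands in for the test's own polling helper):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output, mirroring the
// "(dbg) Run:" lines in the log.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	ctx := "auto-065740"
	// Recreate the netcat deployment from the repo's testdata, as in net_test.go:149.
	run("kubectl", "--context", ctx, "replace", "--force", "-f", "testdata/netcat-deployment.yaml")
	// Wait for the pod behind the app=netcat label to become Ready.
	if err := run("kubectl", "--context", ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "app=netcat", "-n", "default", "--timeout=15m"); err != nil {
		fmt.Println("netcat pod never became ready:", err)
	}
}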

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-065740 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-065740 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-065740 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
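Editor's note: the Localhost and HairPin subtests both exec into the netcat deployment: the first checks that the pod can reach port 8080 on its own localhost, the second that it can reach itself through the `netcat` service name (hairpin traffic). A small Go sketch of those two probes (nc commands copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

func probe(ctx, target string) {
	// nc -z only checks that the port is open; -w/-i bound the wait time.
	cmd := exec.Command("kubectl", "--context", ctx, "exec", "deployment/netcat",
		"--", "/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target))
	if err := cmd.Run(); err != nil {
		fmt.Printf("%s: connection failed: %v\n", target, err)
		return
	}
	fmt.Printf("%s: reachable\n", target)
}

func main() {
	ctx := "auto-065740"
	probe(ctx, "localhost") // Localhost subtest
	probe(ctx, "netcat")    // HairPin subtest: pod reaching itself via its own service
}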

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (36.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-065740 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0211 02:53:20.720452   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/old-k8s-version-817513/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:53:20.726915   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/old-k8s-version-817513/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:53:20.738266   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/old-k8s-version-817513/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:53:20.759657   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/old-k8s-version-817513/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:53:20.801080   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/old-k8s-version-817513/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:53:20.882527   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/old-k8s-version-817513/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:53:21.044085   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/old-k8s-version-817513/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:53:21.365822   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/old-k8s-version-817513/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:53:22.008022   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/old-k8s-version-817513/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:53:23.289572   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/old-k8s-version-817513/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:53:25.851484   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/old-k8s-version-817513/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-065740 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (36.581652547s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (36.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-928gk" [1a633112-6fd6-4903-a695-d46f02a5f584] Running
E0211 02:53:30.973810   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/old-k8s-version-817513/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004122385s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-065740 "pgrep -a kubelet"
I0211 02:53:33.251485   19028 config.go:182] Loaded profile config "flannel-065740": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-065740 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-csvk7" [8cd521e5-a554-4e41-861a-95863683ff56] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-csvk7" [8cd521e5-a554-4e41-861a-95863683ff56] Running
E0211 02:53:41.216241   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/old-k8s-version-817513/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004252251s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-065740 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-065740 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-065740 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jpl9n" [59b98e5b-45d6-4483-91d5-eb7569d74d37] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003895555s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-065740 "pgrep -a kubelet"
I0211 02:53:49.606177   19028 config.go:182] Loaded profile config "enable-default-cni-065740": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-065740 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-bxjlg" [31188cee-feae-4192-ba1b-2f4ce8906551] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-bxjlg" [31188cee-feae-4192-ba1b-2f4ce8906551] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004097363s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jpl9n" [59b98e5b-45d6-4483-91d5-eb7569d74d37] Running
E0211 02:53:53.188049   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/addons-652362/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003621639s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-082831 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-082831 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-082831 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-082831 -n embed-certs-082831
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-082831 -n embed-certs-082831: exit status 2 (307.226796ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-082831 -n embed-certs-082831
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-082831 -n embed-certs-082831: exit status 2 (320.35218ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-082831 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-082831 -n embed-certs-082831
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-082831 -n embed-certs-082831
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-065740 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-065740 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-065740 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (67.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-065740 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0211 02:54:01.697737   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/old-k8s-version-817513/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-065740 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m7.434033555s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (44.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-065740 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0211 02:54:42.659786   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/old-k8s-version-817513/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:55:03.309509   19028 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/functional-149709/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-065740 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (44.952542146s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (44.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-sxtpv" [8f343005-45b4-42c6-afd7-80e75f06f4aa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004294583s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-wlgsp" [2616aba5-2298-467e-bbc1-cbc8bfef68be] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003800907s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-065740 "pgrep -a kubelet"
I0211 02:55:08.596401   19028 config.go:182] Loaded profile config "bridge-065740": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-065740 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-bn7cm" [f3bed4b9-d773-45ce-8570-0484d4657628] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-bn7cm" [f3bed4b9-d773-45ce-8570-0484d4657628] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003031449s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-065740 "pgrep -a kubelet"
I0211 02:55:10.236284   19028 config.go:182] Loaded profile config "kindnet-065740": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-065740 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-tjl27" [2aba68e5-4b41-4516-98af-7e50e7dc56c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-tjl27" [2aba68e5-4b41-4516-98af-7e50e7dc56c9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.00522s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-wlgsp" [2616aba5-2298-467e-bbc1-cbc8bfef68be] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00423035s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-289377 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-289377 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-289377 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-289377 -n default-k8s-diff-port-289377
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-289377 -n default-k8s-diff-port-289377: exit status 2 (291.944903ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-289377 -n default-k8s-diff-port-289377
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-289377 -n default-k8s-diff-port-289377: exit status 2 (293.144526ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-289377 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-289377 -n default-k8s-diff-port-289377
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-289377 -n default-k8s-diff-port-289377
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-065740 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-065740 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-065740 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-065740 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-065740 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-065740 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (47.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-065740 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-065740 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (47.188800799s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (47.19s)
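Editor's note: the Start subtests in this section differ only in how the CNI is selected: no flag for "auto", --cni=flannel/bridge/kindnet, --cni=<path to a manifest> for custom-flannel, and --enable-default-cni=true for the legacy bridge. A hedged sketch that just enumerates those argument lists (profile names and flags as logged; purely illustrative):

package main

import (
	"fmt"
	"strings"
)

func main() {
	base := []string{"start", "--memory=3072", "--alsologtostderr",
		"--wait=true", "--wait-timeout=15m", "--driver=docker", "--container-runtime=crio"}
	// CNI selection per profile, as seen in the logged invocations above.
	variants := map[string][]string{
		"auto-065740":               nil, // no flag: minikube picks the CNI itself
		"flannel-065740":            {"--cni=flannel"},
		"bridge-065740":             {"--cni=bridge"},
		"kindnet-065740":            {"--cni=kindnet"},
		"custom-flannel-065740":     {"--cni=testdata/kube-flannel.yaml"},
		"enable-default-cni-065740": {"--enable-default-cni=true"},
	}
	for profile, extra := range variants {
		args := append([]string{}, base...)
		args = append(args, "-p", profile)
		args = append(args, extra...)
		fmt.Println("out/minikube-linux-amd64 " + strings.Join(args, " "))
	}
}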

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-065740 "pgrep -a kubelet"
I0211 02:56:09.172360   19028 config.go:182] Loaded profile config "custom-flannel-065740": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-065740 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-kf2n8" [828c6188-0364-479a-9912-7056d515fd46] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-kf2n8" [828c6188-0364-479a-9912-7056d515fd46] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003368217s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-065740 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-065740 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-065740 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    

Test skip (27/324)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.26s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-652362 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.26s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
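Editor's note: most of the skipped tests in this section are guarded the same way: they inspect the configured driver or container runtime at the top of the test and call t.Skipf when it does not match. A hedged approximation of such a guard (the containerRuntime helper is assumed for illustration; see docker_test.go:41 for the real check):

package test

import "testing"

// containerRuntime would report the runtime the suite was started with
// ("docker", "containerd" or "crio"); it stands in for the suite's real helper.
func containerRuntime() string { return "crio" }

func TestDockerFlagsSketch(t *testing.T) {
	if containerRuntime() != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s",
			containerRuntime())
	}
	// ... docker-specific assertions would follow here ...
}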

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-081956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-081956
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-065740 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-065740

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-065740

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-065740

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-065740

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-065740

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-065740

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-065740

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-065740

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-065740

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-065740

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-065740

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-065740" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-065740" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20400-12240/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 02:45:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-050042
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20400-12240/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 02:43:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-504968
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20400-12240/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 02:44:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-346537
contexts:
- context:
    cluster: NoKubernetes-050042
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 02:45:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: NoKubernetes-050042
  name: NoKubernetes-050042
- context:
    cluster: kubernetes-upgrade-504968
    user: kubernetes-upgrade-504968
  name: kubernetes-upgrade-504968
- context:
    cluster: pause-346537
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 02:44:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-346537
  name: pause-346537
current-context: NoKubernetes-050042
kind: Config
preferences: {}
users:
- name: NoKubernetes-050042
  user:
    client-certificate: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/NoKubernetes-050042/client.crt
    client-key: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/NoKubernetes-050042/client.key
- name: kubernetes-upgrade-504968
  user:
    client-certificate: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/kubernetes-upgrade-504968/client.crt
    client-key: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/kubernetes-upgrade-504968/client.key
- name: pause-346537
  user:
    client-certificate: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/pause-346537/client.crt
    client-key: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/pause-346537/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-065740

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065740"

                                                
                                                
----------------------- debugLogs end: kubenet-065740 [took: 3.024181608s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-065740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-065740
--- SKIP: TestNetworkPlugins/group/kubenet (3.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-065740 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-065740

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-065740

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-065740

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-065740

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-065740

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-065740

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-065740

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-065740

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-065740

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-065740

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-065740

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-065740" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-065740

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-065740

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-065740

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-065740

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-065740" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-065740" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20400-12240/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 02:43:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-504968
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20400-12240/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 02:44:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-346537
contexts:
- context:
    cluster: kubernetes-upgrade-504968
    user: kubernetes-upgrade-504968
  name: kubernetes-upgrade-504968
- context:
    cluster: pause-346537
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 02:44:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: pause-346537
  name: pause-346537
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-504968
  user:
    client-certificate: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/kubernetes-upgrade-504968/client.crt
    client-key: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/kubernetes-upgrade-504968/client.key
- name: pause-346537
  user:
    client-certificate: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/pause-346537/client.crt
    client-key: /home/jenkins/minikube-integration/20400-12240/.minikube/profiles/pause-346537/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-065740

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-065740" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065740"

                                                
                                                
----------------------- debugLogs end: cilium-065740 [took: 3.387901988s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-065740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-065740
--- SKIP: TestNetworkPlugins/group/cilium (3.54s)

                                                
                                    