Test Report: Docker_Linux_crio 20062

964562641276d457941dbb6d7cf4aa7e43312d02:2024-12-10:37415

Test failures (3/330)

|-------|-----------------------------------|--------------|
| Order | Failed test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 36    | TestAddons/parallel/Ingress       | 151.73       |
| 38    | TestAddons/parallel/MetricsServer | 366.66       |
| 103   | TestFunctional/parallel/MySQL     | 602.8        |
|-------|-----------------------------------|--------------|
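To re-run only these three failures locally, Go's -run filter over the subtest paths should work. This is a sketch: the test/integration package path matches the minikube repo layout, but any repo-specific build tags and the minikube driver/runtime flags this CI run passes are omitted here.

	# go test splits -run on unbracketed slashes, one regex per subtest level;
	# this also selects sibling names (e.g. TestFunctional/parallel/Ingress) if they exist, which is harmless
	go test ./test/integration -timeout 90m \
	  -run '^(TestAddons|TestFunctional)$/^parallel$/^(Ingress|MetricsServer|MySQL)$'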
TestAddons/parallel/Ingress (151.73s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-701527 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-701527 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-701527 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [27d4faaf-8b50-41e6-8db9-fd2e0948d34f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [27d4faaf-8b50-41e6-8db9-fd2e0948d34f] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003985679s
I1209 23:47:28.335559   15396 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-701527 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.508950549s)

** stderr **
	ssh: Process exited with status 28
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
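Triage note: the ssh "Process exited with status 28" above is curl's exit code 28 (operation timed out), i.e. nothing answered on port 80 inside the node within curl's window. A minimal manual follow-up, reusing only names that appear in this log (the addons-701527 profile and the controller label selector):

	# Re-run the exact probe that failed
	out/minikube-linux-amd64 -p addons-701527 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

	# List admitted ingresses and check the controller's recent logs
	kubectl --context addons-701527 get ingress -A
	kubectl --context addons-701527 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50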
addons_test.go:286: (dbg) Run:  kubectl --context addons-701527 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-701527
helpers_test.go:235: (dbg) docker inspect addons-701527:

-- stdout --
	[
	    {
	        "Id": "845a2978a2b56775d174983f7156b47f75cbd517d111b717fa683318c531e629",
	        "Created": "2024-12-09T23:44:06.447018212Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 17464,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-09T23:44:06.580880962Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/845a2978a2b56775d174983f7156b47f75cbd517d111b717fa683318c531e629/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/845a2978a2b56775d174983f7156b47f75cbd517d111b717fa683318c531e629/hostname",
	        "HostsPath": "/var/lib/docker/containers/845a2978a2b56775d174983f7156b47f75cbd517d111b717fa683318c531e629/hosts",
	        "LogPath": "/var/lib/docker/containers/845a2978a2b56775d174983f7156b47f75cbd517d111b717fa683318c531e629/845a2978a2b56775d174983f7156b47f75cbd517d111b717fa683318c531e629-json.log",
	        "Name": "/addons-701527",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-701527:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-701527",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/feeefb46f860108fd8c940e50fcada0bdf5c7b66d681bb2fa312b75ddd6ae3e8-init/diff:/var/lib/docker/overlay2/ab6cf1b3d2a8cc4179735a54668a5a4ec060988eb25398d5edaaa8c4eb9fdd94/diff",
	                "MergedDir": "/var/lib/docker/overlay2/feeefb46f860108fd8c940e50fcada0bdf5c7b66d681bb2fa312b75ddd6ae3e8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/feeefb46f860108fd8c940e50fcada0bdf5c7b66d681bb2fa312b75ddd6ae3e8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/feeefb46f860108fd8c940e50fcada0bdf5c7b66d681bb2fa312b75ddd6ae3e8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-701527",
	                "Source": "/var/lib/docker/volumes/addons-701527/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-701527",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-701527",
	                "name.minikube.sigs.k8s.io": "addons-701527",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6fdb3567838e25a0e40a29c05350222b1cd03ced5a0f9bbbc3dc4c2a2f27bdcf",
	            "SandboxKey": "/var/run/docker/netns/6fdb3567838e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-701527": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "f7d7d2fb753c6c47c2c2a4eaa0e0c3f27dba879f8e03828a5e109b66b1f60920",
	                    "EndpointID": "3313ac333154670091e4647b944cdb1464dd19e65f492f5325eb1e688e2b98b8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-701527",
	                        "845a2978a2b5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
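When reading the inspect dump above, single fields can be pulled with Go templates instead of scanning the JSON. The first command below is the same helper call that appears later in this log (provisionDockerMachine); the second is an analogous sketch for the container's static IP:

	# Host port mapped to the node's SSH endpoint (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-701527

	# Static IP on the addons-701527 network
	docker container inspect -f '{{(index .NetworkSettings.Networks "addons-701527").IPAddress}}' addons-701527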
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-701527 -n addons-701527
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-701527 logs -n 25: (1.198964175s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-694743                                                                     | download-only-694743   | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| start   | --download-only -p                                                                          | download-docker-926270 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | download-docker-926270                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-926270                                                                   | download-docker-926270 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-759052   | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | binary-mirror-759052                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37197                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-759052                                                                     | binary-mirror-759052   | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| addons  | enable dashboard -p                                                                         | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | addons-701527                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | addons-701527                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-701527 --wait=true                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:46 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-701527 addons disable                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-701527 addons disable                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | -p addons-701527                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-701527 addons                                                                        | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-701527 addons disable                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-701527 ip                                                                            | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	| addons  | addons-701527 addons disable                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-701527 addons disable                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-701527 ssh cat                                                                       | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | /opt/local-path-provisioner/pvc-d348a07d-27a8-404f-adfc-4e8b72e76d0a_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-701527 addons disable                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:47 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-701527 addons disable                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:47 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-701527 addons                                                                        | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-701527 addons                                                                        | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-701527 addons                                                                        | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-701527 addons                                                                        | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-701527 ssh curl -s                                                                   | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-701527 ip                                                                            | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:49 UTC | 09 Dec 24 23:49 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:43:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:43:42.291350   16706 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:43:42.291956   16706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:42.292008   16706 out.go:358] Setting ErrFile to fd 2...
	I1209 23:43:42.292026   16706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:42.292469   16706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-8617/.minikube/bin
	I1209 23:43:42.293591   16706 out.go:352] Setting JSON to false
	I1209 23:43:42.294440   16706 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1569,"bootTime":1733786253,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:43:42.294553   16706 start.go:139] virtualization: kvm guest
	I1209 23:43:42.296682   16706 out.go:177] * [addons-701527] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:43:42.298147   16706 notify.go:220] Checking for updates...
	I1209 23:43:42.298178   16706 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 23:43:42.299801   16706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:43:42.301367   16706 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-8617/kubeconfig
	I1209 23:43:42.302924   16706 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-8617/.minikube
	I1209 23:43:42.304469   16706 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:43:42.305871   16706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:43:42.307386   16706 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:43:42.331603   16706 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:43:42.331700   16706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:43:42.377573   16706 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-09 23:43:42.368637796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:43:42.377679   16706 docker.go:318] overlay module found
	I1209 23:43:42.379787   16706 out.go:177] * Using the docker driver based on user configuration
	I1209 23:43:42.381419   16706 start.go:297] selected driver: docker
	I1209 23:43:42.381437   16706 start.go:901] validating driver "docker" against <nil>
	I1209 23:43:42.381450   16706 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:43:42.382222   16706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:43:42.429393   16706 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-09 23:43:42.42124561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:43:42.429563   16706 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:43:42.429799   16706 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:43:42.431796   16706 out.go:177] * Using Docker driver with root privileges
	I1209 23:43:42.433203   16706 cni.go:84] Creating CNI manager for ""
	I1209 23:43:42.433274   16706 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 23:43:42.433300   16706 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 23:43:42.433381   16706 start.go:340] cluster config:
	{Name:addons-701527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-701527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:43:42.434804   16706 out.go:177] * Starting "addons-701527" primary control-plane node in "addons-701527" cluster
	I1209 23:43:42.436125   16706 cache.go:121] Beginning downloading kic base image for docker with crio
	I1209 23:43:42.437550   16706 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1209 23:43:42.438812   16706 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:43:42.438837   16706 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 23:43:42.438849   16706 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-8617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 23:43:42.438856   16706 cache.go:56] Caching tarball of preloaded images
	I1209 23:43:42.438923   16706 preload.go:172] Found /home/jenkins/minikube-integration/20062-8617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 23:43:42.438935   16706 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 23:43:42.439279   16706 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/config.json ...
	I1209 23:43:42.439309   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/config.json: {Name:mkddb15bcf662292992308fcda9e5afee384d781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:43:42.454544   16706 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 23:43:42.454662   16706 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1209 23:43:42.454679   16706 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1209 23:43:42.454683   16706 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1209 23:43:42.454690   16706 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1209 23:43:42.454698   16706 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from local cache
	I1209 23:43:54.336857   16706 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from cached tarball
	I1209 23:43:54.336900   16706 cache.go:194] Successfully downloaded all kic artifacts
	I1209 23:43:54.336954   16706 start.go:360] acquireMachinesLock for addons-701527: {Name:mk1a37956add636236f0f7623a5fab0561619f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:43:54.337068   16706 start.go:364] duration metric: took 88.265µs to acquireMachinesLock for "addons-701527"
	I1209 23:43:54.337096   16706 start.go:93] Provisioning new machine with config: &{Name:addons-701527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-701527 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:43:54.337198   16706 start.go:125] createHost starting for "" (driver="docker")
	I1209 23:43:54.339145   16706 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1209 23:43:54.339398   16706 start.go:159] libmachine.API.Create for "addons-701527" (driver="docker")
	I1209 23:43:54.339428   16706 client.go:168] LocalClient.Create starting
	I1209 23:43:54.339520   16706 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca.pem
	I1209 23:43:54.467011   16706 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/cert.pem
	I1209 23:43:54.667107   16706 cli_runner.go:164] Run: docker network inspect addons-701527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 23:43:54.683385   16706 cli_runner.go:211] docker network inspect addons-701527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 23:43:54.683462   16706 network_create.go:284] running [docker network inspect addons-701527] to gather additional debugging logs...
	I1209 23:43:54.683484   16706 cli_runner.go:164] Run: docker network inspect addons-701527
	W1209 23:43:54.699365   16706 cli_runner.go:211] docker network inspect addons-701527 returned with exit code 1
	I1209 23:43:54.699392   16706 network_create.go:287] error running [docker network inspect addons-701527]: docker network inspect addons-701527: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-701527 not found
	I1209 23:43:54.699403   16706 network_create.go:289] output of [docker network inspect addons-701527]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-701527 not found
	
	** /stderr **
	I1209 23:43:54.699516   16706 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 23:43:54.715467   16706 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c255f0}
	I1209 23:43:54.715522   16706 network_create.go:124] attempt to create docker network addons-701527 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1209 23:43:54.715581   16706 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-701527 addons-701527
	I1209 23:43:54.775951   16706 network_create.go:108] docker network addons-701527 192.168.49.0/24 created
	I1209 23:43:54.775984   16706 kic.go:121] calculated static IP "192.168.49.2" for the "addons-701527" container
	I1209 23:43:54.776060   16706 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 23:43:54.791352   16706 cli_runner.go:164] Run: docker volume create addons-701527 --label name.minikube.sigs.k8s.io=addons-701527 --label created_by.minikube.sigs.k8s.io=true
	I1209 23:43:54.807968   16706 oci.go:103] Successfully created a docker volume addons-701527
	I1209 23:43:54.808051   16706 cli_runner.go:164] Run: docker run --rm --name addons-701527-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-701527 --entrypoint /usr/bin/test -v addons-701527:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib
	I1209 23:44:01.918760   16706 cli_runner.go:217] Completed: docker run --rm --name addons-701527-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-701527 --entrypoint /usr/bin/test -v addons-701527:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib: (7.110650813s)
	I1209 23:44:01.918791   16706 oci.go:107] Successfully prepared a docker volume addons-701527
	I1209 23:44:01.918821   16706 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:44:01.918850   16706 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 23:44:01.918933   16706 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20062-8617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-701527:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 23:44:06.385319   16706 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20062-8617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-701527:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.466339119s)
	I1209 23:44:06.385349   16706 kic.go:203] duration metric: took 4.46649867s to extract preloaded images to volume ...
	W1209 23:44:06.385464   16706 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 23:44:06.385549   16706 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 23:44:06.431904   16706 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-701527 --name addons-701527 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-701527 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-701527 --network addons-701527 --ip 192.168.49.2 --volume addons-701527:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615
	I1209 23:44:06.756203   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Running}}
	I1209 23:44:06.774385   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:06.792768   16706 cli_runner.go:164] Run: docker exec addons-701527 stat /var/lib/dpkg/alternatives/iptables
	I1209 23:44:06.833331   16706 oci.go:144] the created container "addons-701527" has a running status.
	I1209 23:44:06.833363   16706 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa...
	I1209 23:44:07.065248   16706 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 23:44:07.089698   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:07.117248   16706 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 23:44:07.117277   16706 kic_runner.go:114] Args: [docker exec --privileged addons-701527 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 23:44:07.195128   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:07.220606   16706 machine.go:93] provisionDockerMachine start ...
	I1209 23:44:07.220697   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:07.241994   16706 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:07.242270   16706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1209 23:44:07.242288   16706 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:44:07.398775   16706 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-701527
	
	I1209 23:44:07.398799   16706 ubuntu.go:169] provisioning hostname "addons-701527"
	I1209 23:44:07.398843   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:07.416433   16706 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:07.416638   16706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1209 23:44:07.416662   16706 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-701527 && echo "addons-701527" | sudo tee /etc/hostname
	I1209 23:44:07.553918   16706 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-701527
	
	I1209 23:44:07.554010   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:07.570357   16706 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:07.570601   16706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1209 23:44:07.570623   16706 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-701527' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-701527/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-701527' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:44:07.695404   16706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:44:07.695437   16706 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20062-8617/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-8617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-8617/.minikube}
	I1209 23:44:07.695463   16706 ubuntu.go:177] setting up certificates
	I1209 23:44:07.695472   16706 provision.go:84] configureAuth start
	I1209 23:44:07.695536   16706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-701527
	I1209 23:44:07.712526   16706 provision.go:143] copyHostCerts
	I1209 23:44:07.712591   16706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-8617/.minikube/ca.pem (1078 bytes)
	I1209 23:44:07.712697   16706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-8617/.minikube/cert.pem (1123 bytes)
	I1209 23:44:07.712756   16706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-8617/.minikube/key.pem (1675 bytes)
	I1209 23:44:07.712812   16706 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-8617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca-key.pem org=jenkins.addons-701527 san=[127.0.0.1 192.168.49.2 addons-701527 localhost minikube]
	I1209 23:44:07.802490   16706 provision.go:177] copyRemoteCerts
	I1209 23:44:07.802546   16706 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:44:07.802585   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:07.819267   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:07.911951   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 23:44:07.934493   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 23:44:07.956309   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 23:44:07.977414   16706 provision.go:87] duration metric: took 281.931064ms to configureAuth
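
configureAuth generated a server certificate whose SANs (the san=[...] list above) cover the loopback address, the container IP, the hostname, localhost and minikube, then pushed it to /etc/docker on the node. To confirm which names a generated certificate actually carries, the SAN extension can be read back with openssl, e.g. against the server.pem from this run:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/20062-8617/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
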
	I1209 23:44:07.977443   16706 ubuntu.go:193] setting minikube options for container-runtime
	I1209 23:44:07.977599   16706 config.go:182] Loaded profile config "addons-701527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:44:07.977692   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:07.994768   16706 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:07.994958   16706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1209 23:44:07.994983   16706 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:44:08.206091   16706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:44:08.206126   16706 machine.go:96] duration metric: took 985.497185ms to provisionDockerMachine
	I1209 23:44:08.206142   16706 client.go:171] duration metric: took 13.866707081s to LocalClient.Create
	I1209 23:44:08.206160   16706 start.go:167] duration metric: took 13.866761679s to libmachine.API.Create "addons-701527"
	I1209 23:44:08.206171   16706 start.go:293] postStartSetup for "addons-701527" (driver="docker")
	I1209 23:44:08.206191   16706 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:44:08.206267   16706 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:44:08.206320   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:08.223150   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:08.316251   16706 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:44:08.319094   16706 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 23:44:08.319161   16706 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1209 23:44:08.319184   16706 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1209 23:44:08.319196   16706 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1209 23:44:08.319213   16706 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-8617/.minikube/addons for local assets ...
	I1209 23:44:08.319282   16706 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-8617/.minikube/files for local assets ...
	I1209 23:44:08.319315   16706 start.go:296] duration metric: took 113.130951ms for postStartSetup
	I1209 23:44:08.319642   16706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-701527
	I1209 23:44:08.336055   16706 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/config.json ...
	I1209 23:44:08.336288   16706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 23:44:08.336329   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:08.353745   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:08.440184   16706 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 23:44:08.444201   16706 start.go:128] duration metric: took 14.106986411s to createHost
	I1209 23:44:08.444225   16706 start.go:83] releasing machines lock for "addons-701527", held for 14.10714519s
	I1209 23:44:08.444293   16706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-701527
	I1209 23:44:08.461280   16706 ssh_runner.go:195] Run: cat /version.json
	I1209 23:44:08.461334   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:08.461384   16706 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:44:08.461439   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:08.478183   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:08.478399   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:08.642670   16706 ssh_runner.go:195] Run: systemctl --version
	I1209 23:44:08.646692   16706 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:44:08.783694   16706 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 23:44:08.787908   16706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:44:08.805730   16706 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1209 23:44:08.805805   16706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:44:08.831361   16706 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
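
The two find/-exec passes above implement CNI config sidelining: CRI-O picks up whatever sits in /etc/cni/net.d, so the bundled loopback, bridge, and podman configs are renamed with a .mk_disabled suffix (reversible, unlike deletion) to leave the field clear for kindnet. The bridge/podman pass, written out as a standalone script:

    # rename competing CNI configs instead of deleting them
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
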
	I1209 23:44:08.831387   16706 start.go:495] detecting cgroup driver to use...
	I1209 23:44:08.831421   16706 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 23:44:08.831457   16706 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:44:08.844432   16706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:44:08.855060   16706 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:44:08.855122   16706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:44:08.867546   16706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:44:08.880740   16706 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:44:08.956437   16706 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:44:09.036536   16706 docker.go:233] disabling docker service ...
	I1209 23:44:09.036590   16706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:44:09.053701   16706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:44:09.065051   16706 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:44:09.140516   16706 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:44:09.216805   16706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:44:09.226910   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:44:09.240762   16706 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:44:09.240813   16706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:09.249462   16706 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:44:09.249528   16706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:09.258292   16706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:09.267398   16706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:09.276327   16706 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:44:09.284771   16706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:09.293582   16706 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:09.307349   16706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:09.315961   16706 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:44:09.323071   16706 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:44:09.323117   16706 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:44:09.335342   16706 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
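
The three lines above are the netfilter prep: the sysctl probe fails with status 255 because br_netfilter is not loaded yet (so /proc/sys/net/bridge/ does not exist), minikube loads the module, then enables IPv4 forwarding; kube-proxy and the CNI depend on both. The same check-then-load fallback as a script:

    # ensure bridged traffic is visible to iptables, then allow forwarding
    sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1 \
      || sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
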
	I1209 23:44:09.342933   16706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:44:09.415449   16706 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:44:09.524846   16706 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:44:09.524919   16706 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:44:09.528108   16706 start.go:563] Will wait 60s for crictl version
	I1209 23:44:09.528157   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:44:09.531039   16706 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:44:09.561465   16706 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1209 23:44:09.561562   16706 ssh_runner.go:195] Run: crio --version
	I1209 23:44:09.594802   16706 ssh_runner.go:195] Run: crio --version
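
After restarting CRI-O, minikube waits up to 60s each for the socket and for crictl to answer; the version call above is the readiness probe. Reproduced against the same socket:

    # probe the runtime over its CRI socket, as the log does
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
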
	I1209 23:44:09.627234   16706 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1209 23:44:09.628420   16706 cli_runner.go:164] Run: docker network inspect addons-701527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 23:44:09.644446   16706 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 23:44:09.648107   16706 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
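
The one-liner above is minikube's idempotent /etc/hosts upsert: strip any line already ending in the name, append a fresh tab-separated mapping to a temp file, and copy the temp file back with sudo (cp rather than mv, so the bind-mounted inode survives). As a reusable helper with a hypothetical name but the same mechanics:

    upsert_host() {  # usage: upsert_host 192.168.49.1 host.minikube.internal
      { grep -v "$(printf '\t')$2\$" /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }
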
	I1209 23:44:09.658605   16706 kubeadm.go:883] updating cluster {Name:addons-701527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-701527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:44:09.658726   16706 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:44:09.658768   16706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:44:09.723815   16706 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:44:09.723839   16706 crio.go:433] Images already preloaded, skipping extraction
	I1209 23:44:09.723878   16706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:44:09.754490   16706 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:44:09.754512   16706 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:44:09.754519   16706 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1209 23:44:09.754599   16706 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-701527 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-701527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
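
The unit fragment above is written as a systemd drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below): the empty ExecStart= first clears the packaged command, then the second ExecStart pins the kubelet binary staged under /var/lib/minikube/binaries/v1.31.2 with the node-specific flags. Inside the node, the merged result is visible with standard systemd tooling:

    systemctl cat kubelet                            # unit plus drop-ins
    systemctl show kubelet -p ExecStart --no-pager   # effective command line
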
	I1209 23:44:09.754661   16706 ssh_runner.go:195] Run: crio config
	I1209 23:44:09.795070   16706 cni.go:84] Creating CNI manager for ""
	I1209 23:44:09.795095   16706 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 23:44:09.795104   16706 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:44:09.795125   16706 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-701527 NodeName:addons-701527 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:44:09.795254   16706 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-701527"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
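
That multi-document file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is staged as /var/tmp/minikube/kubeadm.yaml.new below and promoted to kubeadm.yaml after the config check. kubeadm can sanity-check such a file without touching the node; a sketch, assuming kubeadm v1.26+ (which ships the validate subcommand):

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
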
	
	I1209 23:44:09.795311   16706 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:44:09.803323   16706 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:44:09.803383   16706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:44:09.811006   16706 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1209 23:44:09.826423   16706 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:44:09.841975   16706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1209 23:44:09.857568   16706 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 23:44:09.860622   16706 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:44:09.870028   16706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:44:09.939730   16706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:44:09.951905   16706 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527 for IP: 192.168.49.2
	I1209 23:44:09.951929   16706 certs.go:194] generating shared ca certs ...
	I1209 23:44:09.951949   16706 certs.go:226] acquiring lock for ca certs: {Name:mk82a507a4733e86b5bb8ab9261ee4fbeee6dad9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.952077   16706 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-8617/.minikube/ca.key
	I1209 23:44:10.098168   16706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-8617/.minikube/ca.crt ...
	I1209 23:44:10.098198   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/ca.crt: {Name:mke990eb271b135ccbc977c996229a252283baa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.098361   16706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-8617/.minikube/ca.key ...
	I1209 23:44:10.098372   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/ca.key: {Name:mk2b7bd59246893c57dc576601c7811abd9e7298 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.098444   16706 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-8617/.minikube/proxy-client-ca.key
	I1209 23:44:10.299597   16706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-8617/.minikube/proxy-client-ca.crt ...
	I1209 23:44:10.299629   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/proxy-client-ca.crt: {Name:mk840e279a932868b17a95fa509aae91d8562222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.299821   16706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-8617/.minikube/proxy-client-ca.key ...
	I1209 23:44:10.299835   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/proxy-client-ca.key: {Name:mkc1c6b221c0aba3ef91583fc143126cb64e403f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.299913   16706 certs.go:256] generating profile certs ...
	I1209 23:44:10.299966   16706 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.key
	I1209 23:44:10.299980   16706 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt with IP's: []
	I1209 23:44:10.373492   16706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt ...
	I1209 23:44:10.373524   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: {Name:mkbf84e05e635237af5578f3a666a71ae1df54ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.373692   16706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.key ...
	I1209 23:44:10.373704   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.key: {Name:mk449bd5cb74f087931c45dc3ca19e3248feb0e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.373771   16706 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.key.c3458f7b
	I1209 23:44:10.373789   16706 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.crt.c3458f7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1209 23:44:10.542382   16706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.crt.c3458f7b ...
	I1209 23:44:10.542412   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.crt.c3458f7b: {Name:mk626b1fdb96268d49d7851ab3383da13099eb4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.542576   16706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.key.c3458f7b ...
	I1209 23:44:10.542589   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.key.c3458f7b: {Name:mkdd58f89195a45e477c101614f0b69c1c04a23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.542661   16706 certs.go:381] copying /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.crt.c3458f7b -> /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.crt
	I1209 23:44:10.542731   16706 certs.go:385] copying /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.key.c3458f7b -> /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.key
	I1209 23:44:10.542776   16706 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/proxy-client.key
	I1209 23:44:10.542792   16706 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/proxy-client.crt with IP's: []
	I1209 23:44:10.687571   16706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/proxy-client.crt ...
	I1209 23:44:10.687597   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/proxy-client.crt: {Name:mkddbc6b6debbd6d4d4a91713847e0fa81cfb165 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.687735   16706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/proxy-client.key ...
	I1209 23:44:10.687748   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/proxy-client.key: {Name:mkf5097941462bfd427f62a91226f3f67a6de3d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.687898   16706 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 23:44:10.687929   16706 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca.pem (1078 bytes)
	I1209 23:44:10.687954   16706 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:44:10.687978   16706 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/key.pem (1675 bytes)
	I1209 23:44:10.688663   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:44:10.712437   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 23:44:10.733714   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:44:10.754989   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 23:44:10.776534   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 23:44:10.796685   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 23:44:10.817486   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:44:10.837489   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:44:10.857830   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:44:10.878560   16706 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:44:10.893645   16706 ssh_runner.go:195] Run: openssl version
	I1209 23:44:10.898576   16706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:44:10.907087   16706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:44:10.910619   16706 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:44:10.910665   16706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:44:10.916997   16706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
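
The b5213941.0 link above is the OpenSSL trust-store convention: tools resolve CAs in /etc/ssl/certs by the certificate's subject hash plus a .0 suffix. Recomputing the hash shows where the name comes from:

    # the hash printed here is the "b5213941" in the symlink name
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
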
	I1209 23:44:10.925412   16706 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:44:10.928356   16706 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 23:44:10.928400   16706 kubeadm.go:392] StartCluster: {Name:addons-701527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-701527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:44:10.928483   16706 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:44:10.928537   16706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:44:10.960284   16706 cri.go:89] found id: ""
	I1209 23:44:10.960343   16706 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:44:10.968161   16706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:44:10.976336   16706 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1209 23:44:10.976399   16706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:44:10.983822   16706 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:44:10.983839   16706 kubeadm.go:157] found existing configuration files:
	
	I1209 23:44:10.983884   16706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:44:10.991209   16706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:44:10.991283   16706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:44:10.998501   16706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:44:11.005865   16706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:44:11.005914   16706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:44:11.013124   16706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:44:11.020638   16706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:44:11.020685   16706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:44:11.027697   16706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:44:11.034905   16706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:44:11.034955   16706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:44:11.042069   16706 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
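
Two things to note in the init invocation above: PATH is prefixed with /var/lib/minikube/binaries/v1.31.2 so the staged kubeadm (and the binaries it shells out to) win over anything on the node, and the long --ignore-preflight-errors list suppresses checks a kic container can never pass (swap, kernel config, ports already bound on the host). The PATH-pinning pattern in isolation:

    # run the exact staged binary regardless of the node's default PATH
    sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm version
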
	I1209 23:44:11.076185   16706 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 23:44:11.076255   16706 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 23:44:11.091672   16706 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1209 23:44:11.091735   16706 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-gcp
	I1209 23:44:11.091763   16706 kubeadm.go:310] OS: Linux
	I1209 23:44:11.091810   16706 kubeadm.go:310] CGROUPS_CPU: enabled
	I1209 23:44:11.091882   16706 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1209 23:44:11.091935   16706 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1209 23:44:11.091993   16706 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1209 23:44:11.092038   16706 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1209 23:44:11.092113   16706 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1209 23:44:11.092157   16706 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1209 23:44:11.092199   16706 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1209 23:44:11.092253   16706 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1209 23:44:11.138467   16706 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 23:44:11.138611   16706 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 23:44:11.138760   16706 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 23:44:11.145223   16706 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 23:44:11.147980   16706 out.go:235]   - Generating certificates and keys ...
	I1209 23:44:11.148085   16706 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 23:44:11.148151   16706 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 23:44:11.358536   16706 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 23:44:11.443796   16706 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 23:44:11.690035   16706 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 23:44:11.953136   16706 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 23:44:12.261494   16706 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 23:44:12.261627   16706 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-701527 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 23:44:12.434374   16706 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 23:44:12.434519   16706 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-701527 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 23:44:12.543438   16706 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 23:44:12.817631   16706 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 23:44:12.942571   16706 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 23:44:12.942682   16706 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 23:44:13.043413   16706 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 23:44:13.416601   16706 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 23:44:13.469795   16706 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 23:44:13.647070   16706 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 23:44:13.852980   16706 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 23:44:13.853506   16706 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 23:44:13.856997   16706 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 23:44:13.859361   16706 out.go:235]   - Booting up control plane ...
	I1209 23:44:13.859460   16706 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 23:44:13.859575   16706 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 23:44:13.860155   16706 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 23:44:13.868977   16706 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 23:44:13.874402   16706 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 23:44:13.874466   16706 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 23:44:13.953671   16706 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 23:44:13.953832   16706 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 23:44:14.955083   16706 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001498803s
	I1209 23:44:14.955178   16706 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 23:44:18.956956   16706 kubeadm.go:310] [api-check] The API server is healthy after 4.00192303s
	I1209 23:44:18.968081   16706 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 23:44:18.978294   16706 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 23:44:18.996373   16706 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 23:44:18.996648   16706 kubeadm.go:310] [mark-control-plane] Marking the node addons-701527 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 23:44:19.004440   16706 kubeadm.go:310] [bootstrap-token] Using token: o9d4gk.ol9z315ujhqpyjtd
	I1209 23:44:19.006152   16706 out.go:235]   - Configuring RBAC rules ...
	I1209 23:44:19.006278   16706 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 23:44:19.011401   16706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 23:44:19.016658   16706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 23:44:19.018951   16706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 23:44:19.021357   16706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 23:44:19.023596   16706 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 23:44:19.364054   16706 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 23:44:19.779522   16706 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 23:44:20.362886   16706 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 23:44:20.363691   16706 kubeadm.go:310] 
	I1209 23:44:20.363780   16706 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 23:44:20.363790   16706 kubeadm.go:310] 
	I1209 23:44:20.363886   16706 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 23:44:20.363897   16706 kubeadm.go:310] 
	I1209 23:44:20.363973   16706 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 23:44:20.364112   16706 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 23:44:20.364183   16706 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 23:44:20.364193   16706 kubeadm.go:310] 
	I1209 23:44:20.364267   16706 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 23:44:20.364276   16706 kubeadm.go:310] 
	I1209 23:44:20.364339   16706 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 23:44:20.364348   16706 kubeadm.go:310] 
	I1209 23:44:20.364444   16706 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 23:44:20.364574   16706 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 23:44:20.364672   16706 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 23:44:20.364685   16706 kubeadm.go:310] 
	I1209 23:44:20.364797   16706 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 23:44:20.364908   16706 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 23:44:20.364922   16706 kubeadm.go:310] 
	I1209 23:44:20.365053   16706 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o9d4gk.ol9z315ujhqpyjtd \
	I1209 23:44:20.365183   16706 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d276d577512ade74a5109f58b5778ce04abe39c8c67256076dac49c0e0be586a \
	I1209 23:44:20.365203   16706 kubeadm.go:310] 	--control-plane 
	I1209 23:44:20.365209   16706 kubeadm.go:310] 
	I1209 23:44:20.365279   16706 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 23:44:20.365288   16706 kubeadm.go:310] 
	I1209 23:44:20.365354   16706 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o9d4gk.ol9z315ujhqpyjtd \
	I1209 23:44:20.365444   16706 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d276d577512ade74a5109f58b5778ce04abe39c8c67256076dac49c0e0be586a 
	I1209 23:44:20.366742   16706 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-gcp\n", err: exit status 1
	I1209 23:44:20.366841   16706 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
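
The join commands printed above pair the bootstrap token with --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's DER-encoded public key. It can be recomputed from ca.crt with the recipe from the kubeadm docs (assuming an RSA CA key, which minikube generates):

    # reproduces the sha256:d276d5... value in the join command above
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
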
	I1209 23:44:20.366865   16706 cni.go:84] Creating CNI manager for ""
	I1209 23:44:20.366871   16706 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 23:44:20.368931   16706 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1209 23:44:20.370244   16706 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 23:44:20.373878   16706 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1209 23:44:20.373902   16706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 23:44:20.390148   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 23:44:20.580225   16706 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 23:44:20.580385   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:20.580408   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-701527 minikube.k8s.io/updated_at=2024_12_09T23_44_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=addons-701527 minikube.k8s.io/primary=true
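
The two kubectl calls above finish bootstrap: a cluster-admin binding for kube-system's default ServiceAccount, plus the minikube.k8s.io/* bookkeeping labels on the node. Both are easy to verify from the host once the kubeconfig is in place:

    kubectl get clusterrolebinding minikube-rbac -o wide
    kubectl get node addons-701527 --show-labels
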
	I1209 23:44:20.587158   16706 ops.go:34] apiserver oom_adj: -16
	I1209 23:44:20.647776   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:21.148758   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:21.647990   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:22.148710   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:22.648746   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:23.148720   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:23.647984   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:24.148143   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:24.648070   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:25.148745   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:25.247786   16706 kubeadm.go:1113] duration metric: took 4.667456543s to wait for elevateKubeSystemPrivileges
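
The burst of "get sa default" calls above (roughly every 500ms) is minikube polling for the default ServiceAccount, which the controller-manager only creates once the control plane is functional; the polling, not the binding itself, accounts for most of the 4.67s. The equivalent wait as a portable shell loop, using the staged binary and kubeconfig from this run:

    # poll until the default ServiceAccount exists, as the log does
    until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
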
	I1209 23:44:25.247829   16706 kubeadm.go:394] duration metric: took 14.319432421s to StartCluster
	I1209 23:44:25.247853   16706 settings.go:142] acquiring lock: {Name:mk3fbdb3180100a5b99ca4ec9ec726523f75f361 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:25.247979   16706 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-8617/kubeconfig
	I1209 23:44:25.248459   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/kubeconfig: {Name:mk0b7b47a4c3647122bd54439d50dda394f7edf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:25.248680   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 23:44:25.248708   16706 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:44:25.248772   16706 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
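Note: the toEnable map above is the merged addon configuration for this profile; every key set to true fans out into the concurrent "Setting addon ..." / "Checking if \"addons-701527\" exists" goroutines that follow, which is why the timestamps below interleave out of order. The same switches can be flipped per profile from the CLI, e.g.:

	minikube -p addons-701527 addons enable ingress
	minikube -p addons-701527 addons list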
	I1209 23:44:25.248917   16706 addons.go:69] Setting yakd=true in profile "addons-701527"
	I1209 23:44:25.248925   16706 addons.go:69] Setting ingress-dns=true in profile "addons-701527"
	I1209 23:44:25.248940   16706 addons.go:234] Setting addon yakd=true in "addons-701527"
	I1209 23:44:25.248944   16706 addons.go:234] Setting addon ingress-dns=true in "addons-701527"
	I1209 23:44:25.248943   16706 config.go:182] Loaded profile config "addons-701527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:44:25.248942   16706 addons.go:69] Setting registry=true in profile "addons-701527"
	I1209 23:44:25.248962   16706 addons.go:234] Setting addon registry=true in "addons-701527"
	I1209 23:44:25.248972   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.248982   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.248967   16706 addons.go:69] Setting metrics-server=true in profile "addons-701527"
	I1209 23:44:25.249023   16706 addons.go:234] Setting addon metrics-server=true in "addons-701527"
	I1209 23:44:25.249036   16706 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-701527"
	I1209 23:44:25.249052   16706 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-701527"
	I1209 23:44:25.249066   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.249074   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.249079   16706 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-701527"
	I1209 23:44:25.249101   16706 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-701527"
	I1209 23:44:25.249406   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.249530   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.249539   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.249550   16706 addons.go:69] Setting volcano=true in profile "addons-701527"
	I1209 23:44:25.249564   16706 addons.go:234] Setting addon volcano=true in "addons-701527"
	I1209 23:44:25.249568   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.249566   16706 addons.go:69] Setting volumesnapshots=true in profile "addons-701527"
	I1209 23:44:25.249583   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.249595   16706 addons.go:234] Setting addon volumesnapshots=true in "addons-701527"
	I1209 23:44:25.249623   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.249629   16706 addons.go:69] Setting storage-provisioner=true in profile "addons-701527"
	I1209 23:44:25.249647   16706 addons.go:234] Setting addon storage-provisioner=true in "addons-701527"
	I1209 23:44:25.249706   16706 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-701527"
	I1209 23:44:25.249748   16706 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-701527"
	I1209 23:44:25.249781   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.250037   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.250063   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.250267   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.250429   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.251229   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.249539   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.251685   16706 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-701527"
	I1209 23:44:25.251704   16706 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-701527"
	I1209 23:44:25.251733   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.252195   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.253278   16706 addons.go:69] Setting inspektor-gadget=true in profile "addons-701527"
	I1209 23:44:25.253341   16706 addons.go:234] Setting addon inspektor-gadget=true in "addons-701527"
	I1209 23:44:25.253388   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.248996   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.255656   16706 addons.go:69] Setting cloud-spanner=true in profile "addons-701527"
	I1209 23:44:25.255723   16706 addons.go:234] Setting addon cloud-spanner=true in "addons-701527"
	I1209 23:44:25.255770   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.256121   16706 addons.go:69] Setting default-storageclass=true in profile "addons-701527"
	I1209 23:44:25.256155   16706 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-701527"
	I1209 23:44:25.256491   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.256733   16706 out.go:177] * Verifying Kubernetes components...
	I1209 23:44:25.256932   16706 addons.go:69] Setting ingress=true in profile "addons-701527"
	I1209 23:44:25.256961   16706 addons.go:234] Setting addon ingress=true in "addons-701527"
	I1209 23:44:25.257005   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.257017   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.257237   16706 addons.go:69] Setting gcp-auth=true in profile "addons-701527"
	I1209 23:44:25.257262   16706 mustload.go:65] Loading cluster: addons-701527
	I1209 23:44:25.259275   16706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:44:25.279845   16706 config.go:182] Loaded profile config "addons-701527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:44:25.279984   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.280173   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.280735   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.281123   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.300350   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1209 23:44:25.300349   16706 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1209 23:44:25.301935   16706 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1209 23:44:25.301966   16706 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1209 23:44:25.303909   16706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1209 23:44:25.303930   16706 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1209 23:44:25.303992   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.304204   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
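Note: each docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" run resolves which host port Docker mapped to the node container's SSH port; the sshutil lines below then dial 127.0.0.1:32768. docker port answers the same question directly (the output shown is what this run would print):

	docker port addons-701527 22/tcp
	# 0.0.0.0:32768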
	W1209 23:44:25.308278   16706 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
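Note: the volcano warning is expected on this job, not a failure: the addon's enable callback rejects the crio runtime, so enablement is skipped and the run continues. To keep the profile quiet, the addon could simply be disabled (standard minikube CLI, applied to this profile name):

	minikube -p addons-701527 addons disable volcano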
	I1209 23:44:25.317261   16706 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-701527"
	I1209 23:44:25.317326   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.317758   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.341552   16706 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1209 23:44:25.341724   16706 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:44:25.343412   16706 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 23:44:25.343443   16706 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 23:44:25.343519   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.343910   16706 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:44:25.343927   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 23:44:25.343976   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.351848   16706 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1209 23:44:25.352010   16706 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1209 23:44:25.353108   16706 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1209 23:44:25.353129   16706 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1209 23:44:25.353195   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.353592   16706 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 23:44:25.353609   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1209 23:44:25.353660   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.354164   16706 addons.go:234] Setting addon default-storageclass=true in "addons-701527"
	I1209 23:44:25.354208   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.354648   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.356133   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.358165   16706 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1209 23:44:25.358234   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1209 23:44:25.358285   16706 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:44:25.360658   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.360793   16706 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 23:44:25.360810   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1209 23:44:25.360863   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.361860   16706 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1209 23:44:25.363199   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1209 23:44:25.363352   16706 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1209 23:44:25.363368   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1209 23:44:25.363416   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.365955   16706 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1209 23:44:25.366018   16706 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1209 23:44:25.367132   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1209 23:44:25.368461   16706 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 23:44:25.368480   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1209 23:44:25.368533   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.368720   16706 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:44:25.370280   16706 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 23:44:25.370298   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1209 23:44:25.370345   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.370557   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1209 23:44:25.372722   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1209 23:44:25.374031   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1209 23:44:25.375330   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1209 23:44:25.376695   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1209 23:44:25.377993   16706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1209 23:44:25.378013   16706 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1209 23:44:25.378038   16706 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1209 23:44:25.378078   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.381729   16706 out.go:177]   - Using image docker.io/registry:2.8.3
	I1209 23:44:25.384600   16706 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1209 23:44:25.384619   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1209 23:44:25.384682   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.387191   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.389354   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.399963   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.409817   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.419799   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.423352   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.424435   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.424818   16706 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1209 23:44:25.427566   16706 out.go:177]   - Using image docker.io/busybox:stable
	I1209 23:44:25.427963   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.429485   16706 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 23:44:25.429504   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1209 23:44:25.429556   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.431663   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.437563   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.445936   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.446296   16706 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 23:44:25.446311   16706 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 23:44:25.446349   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.452620   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.463271   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.699889   16706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:44:25.699978   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
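Note: the sed pipeline above rewrites the coredns ConfigMap in flight: it inserts a hosts block ahead of the "forward . /etc/resolv.conf" line and a log directive ahead of errors, then pipes the result into kubectl replace. The fragment it injects into the Corefile is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}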
	I1209 23:44:25.704287   16706 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1209 23:44:25.704318   16706 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1209 23:44:25.710611   16706 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1209 23:44:25.710632   16706 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1209 23:44:25.797291   16706 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1209 23:44:25.797321   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1209 23:44:25.808292   16706 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1209 23:44:25.808344   16706 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1209 23:44:25.890307   16706 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1209 23:44:25.890352   16706 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1209 23:44:25.892144   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 23:44:25.899942   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 23:44:25.988335   16706 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1209 23:44:25.988430   16706 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1209 23:44:25.989261   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1209 23:44:25.999726   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 23:44:26.002492   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1209 23:44:26.092005   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 23:44:26.095886   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 23:44:26.101953   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:44:26.103720   16706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1209 23:44:26.103747   16706 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1209 23:44:26.189579   16706 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1209 23:44:26.189661   16706 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1209 23:44:26.193203   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 23:44:26.206605   16706 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1209 23:44:26.206632   16706 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1209 23:44:26.291032   16706 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1209 23:44:26.291123   16706 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1209 23:44:26.393546   16706 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 23:44:26.393627   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1209 23:44:26.492758   16706 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1209 23:44:26.492785   16706 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1209 23:44:26.585623   16706 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1209 23:44:26.585707   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1209 23:44:26.595586   16706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1209 23:44:26.595616   16706 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1209 23:44:26.803673   16706 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 23:44:26.803769   16706 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 23:44:26.886204   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1209 23:44:26.892853   16706 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1209 23:44:26.892942   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1209 23:44:27.085506   16706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1209 23:44:27.085589   16706 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1209 23:44:27.095484   16706 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:44:27.095656   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1209 23:44:27.191048   16706 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:44:27.191078   16706 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 23:44:27.286245   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1209 23:44:27.386109   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:44:27.486739   16706 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.786717562s)
	I1209 23:44:27.486781   16706 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1209 23:44:27.488143   16706 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.788219446s)
	I1209 23:44:27.489010   16706 node_ready.go:35] waiting up to 6m0s for node "addons-701527" to be "Ready" ...
	I1209 23:44:27.489209   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.597037926s)
	I1209 23:44:27.489258   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.589227728s)
	I1209 23:44:27.492691   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:44:27.497378   16706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1209 23:44:27.497409   16706 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1209 23:44:27.885194   16706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1209 23:44:27.885290   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1209 23:44:28.400451   16706 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-701527" context rescaled to 1 replicas
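Note: rescaling coredns to one replica is minikube's default for a single-node cluster; a kubectl equivalent of what kapi.go:214 just did would be:

	kubectl -n kube-system scale deployment coredns --replicas=1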
	I1209 23:44:28.490965   16706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1209 23:44:28.491054   16706 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1209 23:44:28.599139   16706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1209 23:44:28.599218   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1209 23:44:29.085139   16706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1209 23:44:29.085226   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1209 23:44:29.302372   16706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 23:44:29.302400   16706 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1209 23:44:29.493438   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 23:44:29.500410   16706 node_ready.go:53] node "addons-701527" has status "Ready":"False"
	I1209 23:44:30.088243   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.098894921s)
	I1209 23:44:30.088318   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.0885675s)
	I1209 23:44:30.088422   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.08589924s)
	I1209 23:44:30.088456   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.996427243s)
	I1209 23:44:31.600721   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.504791899s)
	I1209 23:44:31.600756   16706 addons.go:475] Verifying addon ingress=true in "addons-701527"
	I1209 23:44:31.600803   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.407521491s)
	I1209 23:44:31.600753   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.498729452s)
	I1209 23:44:31.600906   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.71462024s)
	I1209 23:44:31.600932   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.314659518s)
	I1209 23:44:31.601253   16706 addons.go:475] Verifying addon registry=true in "addons-701527"
	I1209 23:44:31.603344   16706 out.go:177] * Verifying ingress addon...
	I1209 23:44:31.603407   16706 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-701527 service yakd-dashboard -n yakd-dashboard
	
	I1209 23:44:31.603355   16706 out.go:177] * Verifying registry addon...
	I1209 23:44:31.606220   16706 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1209 23:44:31.606220   16706 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1209 23:44:31.612203   16706 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 23:44:31.612230   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:31.612820   16706 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1209 23:44:31.612874   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:31.992303   16706 node_ready.go:53] node "addons-701527" has status "Ready":"False"
	I1209 23:44:32.113758   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:32.114382   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:32.389631   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.003472062s)
	W1209 23:44:32.389721   16706 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 23:44:32.389727   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.897000638s)
	I1209 23:44:32.389759   16706 addons.go:475] Verifying addon metrics-server=true in "addons-701527"
	I1209 23:44:32.389770   16706 retry.go:31] will retry after 347.537129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
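Note: this failure is a CRD ordering race, not a broken manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same batch that creates the snapshot.storage.k8s.io CRDs, and the API server has not established the new kinds yet, hence "ensure CRDs are installed first". minikube just retries (the --force re-apply at 23:44:32.737886 below succeeds). Done by hand, one would apply the CRDs first and wait for them to be established:

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml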
	I1209 23:44:32.595972   16706 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1209 23:44:32.596059   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:32.610657   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:32.611320   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:32.617483   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:32.737886   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:44:32.803886   16706 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1209 23:44:32.886563   16706 addons.go:234] Setting addon gcp-auth=true in "addons-701527"
	I1209 23:44:32.886660   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:32.887184   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:32.913504   16706 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1209 23:44:32.913571   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:32.936796   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:33.111862   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:33.112513   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:33.218154   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.724661985s)
	I1209 23:44:33.218193   16706 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-701527"
	I1209 23:44:33.219771   16706 out.go:177] * Verifying csi-hostpath-driver addon...
	I1209 23:44:33.222152   16706 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1209 23:44:33.286957   16706 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 23:44:33.286990   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
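Note: every "Verifying ... addon" step uses the same kapi.go loop: list pods matching a label selector in a namespace, then poll roughly once per second until each leaves Pending, which is what the long runs of "current state: Pending" lines below are. A kubectl equivalent of this particular wait (using Ready, the condition the verification ultimately needs):

	kubectl wait -n kube-system pod \
	    --selector=kubernetes.io/minikube-addons=csi-hostpath-driver \
	    --for=condition=ready --timeout=6m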
	I1209 23:44:33.609820   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:33.610492   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:33.725583   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:34.109696   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:34.110302   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:34.225622   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:34.492032   16706 node_ready.go:53] node "addons-701527" has status "Ready":"False"
	I1209 23:44:34.609390   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:34.609950   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:34.725647   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:35.109179   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:35.109569   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:35.225896   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:35.578139   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.840211211s)
	I1209 23:44:35.578229   16706 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.664696486s)
	I1209 23:44:35.580291   16706 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:44:35.581878   16706 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1209 23:44:35.583280   16706 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1209 23:44:35.583298   16706 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1209 23:44:35.600484   16706 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1209 23:44:35.600506   16706 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1209 23:44:35.610201   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:35.610608   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:35.617560   16706 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 23:44:35.617581   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1209 23:44:35.634418   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 23:44:35.725192   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:35.946409   16706 addons.go:475] Verifying addon gcp-auth=true in "addons-701527"
	I1209 23:44:35.948671   16706 out.go:177] * Verifying gcp-auth addon...
	I1209 23:44:35.950763   16706 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1209 23:44:35.986469   16706 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 23:44:35.986496   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:36.109444   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:36.109945   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:36.225289   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:36.453980   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:36.492196   16706 node_ready.go:53] node "addons-701527" has status "Ready":"False"
	I1209 23:44:36.609998   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:36.610522   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:36.726274   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:36.953531   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:37.109254   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:37.109769   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:37.225962   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:37.456192   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:37.609552   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:37.609935   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:37.725494   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:37.954045   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:38.109642   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:38.110070   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:38.226034   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:38.453885   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:38.492238   16706 node_ready.go:53] node "addons-701527" has status "Ready":"False"
	I1209 23:44:38.609839   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:38.610494   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:38.726122   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:38.953821   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:39.110268   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:39.110581   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:39.225684   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:39.454378   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:39.609477   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:39.609982   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:39.725253   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:39.953600   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:40.111198   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:40.111478   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:40.225140   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:40.453623   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:40.609347   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:40.609816   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:40.725411   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:40.953808   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:40.992323   16706 node_ready.go:53] node "addons-701527" has status "Ready":"False"
	I1209 23:44:41.109879   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:41.110412   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:41.225583   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:41.454113   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:41.609926   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:41.610482   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:41.725721   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:41.954108   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:42.108907   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:42.109441   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:42.225590   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:42.453432   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:42.609439   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:42.609833   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:42.725652   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:42.953911   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:43.109597   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:43.110045   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:43.225305   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:43.453760   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:43.492096   16706 node_ready.go:53] node "addons-701527" has status "Ready":"False"
	I1209 23:44:43.609626   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:43.610177   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:43.725393   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:43.953865   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:44.189886   16706 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 23:44:44.189913   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:44.190696   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:44.229151   16706 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 23:44:44.229175   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:44.486102   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:44.493006   16706 node_ready.go:49] node "addons-701527" has status "Ready":"True"
	I1209 23:44:44.493036   16706 node_ready.go:38] duration metric: took 17.003992506s for node "addons-701527" to be "Ready" ...
	I1209 23:44:44.493048   16706 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:44:44.502184   16706 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace to be "Ready" ...
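
The kapi.go:96 lines above are emitted by per-addon wait loops: each loop lists pods by label selector roughly twice a second and logs the current phase until every matching pod reports Ready. Below is a minimal client-go sketch of that pattern; the helper name, the 500ms interval, and the namespace handling are illustrative assumptions, not minikube's actual kapi.go implementation.

    package poll

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForLabeledPods polls until every pod matching selector in ns is Ready.
    // Illustrative only: the name and the 500ms interval are assumptions.
    func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil || len(pods.Items) == 0 {
    				return false, nil // transient errors and empty lists are retried, not fatal
    			}
    			for _, p := range pods.Items {
    				if !podReady(&p) {
    					// Same shape as the "waiting for pod" lines in this log.
    					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    }

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

Timing a call to such a helper with time.Since is all it takes to produce the "duration metric: took …" lines that appear once each selector finally converges, as with the registry selector further below.
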
	I1209 23:44:44.611984   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:44.612161   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:44.791840   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:44.987270   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:45.109907   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:45.110172   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:45.226905   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:45.454770   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:45.610167   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:45.610491   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:45.726605   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:45.954279   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:46.110630   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:46.111325   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:46.226168   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:46.454490   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:46.508175   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:46.610378   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:46.610715   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:46.726743   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:46.953554   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:47.110778   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:47.111109   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:47.226240   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:47.453593   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:47.609985   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:47.610566   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:47.727362   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:47.986139   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:48.110808   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:48.111553   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:48.227340   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:48.484867   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:48.610678   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:48.611072   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:48.727752   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:48.954685   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:49.007794   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:49.110148   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:49.110513   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:49.226545   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:49.454157   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:49.610550   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:49.610572   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:49.726822   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:49.953679   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:50.110362   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:50.110802   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:50.226841   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:50.453595   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:50.610542   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:50.611195   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:50.727391   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:50.986826   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:51.110670   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:51.110859   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:51.229633   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:51.486197   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:51.508677   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:51.610136   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:51.610219   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:51.727135   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:51.955088   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:52.109501   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:52.110177   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:52.227701   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:52.454536   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:52.609603   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:52.609801   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:52.726852   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:52.954576   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:53.110453   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:53.110684   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:53.226900   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:53.455132   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:53.609662   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:53.610291   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:53.725808   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:53.954032   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:54.009017   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:54.110061   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:54.110327   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:54.225947   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:54.454097   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:54.610729   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:54.611181   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:54.727450   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:54.954124   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:55.109905   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:55.110285   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:55.226058   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:55.454202   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:55.610026   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:55.610402   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:55.726626   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:55.955833   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:56.110706   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:56.111206   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:56.226054   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:56.454614   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:56.507610   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:56.609652   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:56.610317   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:56.726352   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:56.954597   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:57.186969   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:57.188518   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:57.293787   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:57.453704   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:57.610681   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:57.611022   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:57.725858   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:57.954171   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:58.110356   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:58.110982   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:58.227295   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:58.454207   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:58.610566   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:58.610956   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:58.726755   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:58.954404   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:59.008515   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:59.116001   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:59.116281   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:59.289428   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:59.486964   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:59.610853   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:59.613181   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:59.725578   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:59.987214   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:00.110656   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:00.110777   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:00.227667   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:00.485331   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:00.610382   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:00.610922   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:00.727135   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:00.954131   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:01.008852   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:01.110336   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:01.110598   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:01.227392   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:01.454584   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:01.610044   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:01.610826   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:01.726704   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:01.953980   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:02.110740   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:02.111246   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:02.226708   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:02.454770   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:02.610315   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:02.610823   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:02.726939   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:02.954271   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:03.110264   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:03.110519   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:03.227481   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:03.454933   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:03.507645   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:03.609937   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:03.610172   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:03.728750   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:03.953698   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:04.110074   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:04.110207   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:04.226245   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:04.485986   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:04.611573   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:04.611892   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:04.725912   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:04.954232   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:05.110532   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:05.110873   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:05.226804   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:05.454234   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:05.508810   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:05.609972   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:05.610275   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:05.726651   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:05.954444   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:06.111136   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:06.111241   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:06.226210   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:06.454114   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:06.610821   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:06.610970   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:06.726009   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:06.954289   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:07.109933   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:07.110497   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:07.227485   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:07.454831   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:07.610355   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:07.610731   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:07.727265   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:07.954413   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:08.008555   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:08.110151   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:08.110463   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:08.227098   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:08.454024   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:08.611229   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:08.611593   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:08.726744   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:08.954888   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:09.109842   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:09.110202   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:09.226093   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:09.454929   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:09.610203   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:09.610424   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:09.788109   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:09.953915   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:10.109788   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:10.110782   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:10.227211   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:10.454239   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:10.507825   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:10.610493   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:10.610650   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:10.727228   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:10.954350   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:11.110257   16706 kapi.go:107] duration metric: took 39.504036002s to wait for kubernetes.io/minikube-addons=registry ...
	I1209 23:45:11.110459   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:11.226714   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:11.454402   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:11.610823   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:11.726632   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:11.953881   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:12.111412   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:12.226478   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:12.453825   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:12.610441   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:12.729169   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:12.986298   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:13.008207   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:13.111784   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:13.227270   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:13.486901   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:13.686831   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:13.787008   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:13.989743   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:14.187209   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:14.304745   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:14.497017   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:14.687138   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:14.786629   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:14.986361   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:15.008765   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:15.110203   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:15.226213   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:15.485900   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:15.610528   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:15.727011   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:15.986405   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:16.110606   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:16.226680   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:16.486258   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:16.610101   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:16.726381   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:16.954763   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:17.110562   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:17.227011   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:17.454118   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:17.508070   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:17.610569   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:17.727021   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:17.985549   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:18.111059   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:18.226636   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:18.454779   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:18.611290   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:18.726100   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:18.954236   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:19.111339   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:19.225865   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:19.454475   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:19.508224   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:19.610888   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:19.727264   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:19.954340   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:20.111166   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:20.225865   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:20.454474   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:20.611018   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:20.726369   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:20.954345   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:21.110668   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:21.226824   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:21.502904   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:21.508491   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:21.610511   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:21.727429   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:21.954794   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:22.109541   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:22.226964   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:22.454511   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:22.609987   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:22.726099   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:22.985193   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:23.111150   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:23.227018   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:23.455765   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:23.609579   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:23.727338   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:23.953985   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:24.007709   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:24.110005   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:24.226887   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:24.456358   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:24.508028   16706 pod_ready.go:93] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:24.508051   16706 pod_ready.go:82] duration metric: took 40.005833308s for pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.508061   16706 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cxp92" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.512157   16706 pod_ready.go:93] pod "coredns-7c65d6cfc9-cxp92" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:24.512175   16706 pod_ready.go:82] duration metric: took 4.108999ms for pod "coredns-7c65d6cfc9-cxp92" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.512196   16706 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-701527" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.515921   16706 pod_ready.go:93] pod "etcd-addons-701527" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:24.515939   16706 pod_ready.go:82] duration metric: took 3.737734ms for pod "etcd-addons-701527" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.515951   16706 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-701527" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.519779   16706 pod_ready.go:93] pod "kube-apiserver-addons-701527" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:24.519798   16706 pod_ready.go:82] duration metric: took 3.840651ms for pod "kube-apiserver-addons-701527" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.519806   16706 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-701527" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.523297   16706 pod_ready.go:93] pod "kube-controller-manager-addons-701527" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:24.523313   16706 pod_ready.go:82] duration metric: took 3.501389ms for pod "kube-controller-manager-addons-701527" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.523323   16706 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qh6vp" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.612318   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:24.726480   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:24.906313   16706 pod_ready.go:93] pod "kube-proxy-qh6vp" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:24.906400   16706 pod_ready.go:82] duration metric: took 383.068779ms for pod "kube-proxy-qh6vp" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.906432   16706 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-701527" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.993241   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:25.111726   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:25.288104   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:25.306784   16706 pod_ready.go:93] pod "kube-scheduler-addons-701527" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:25.306810   16706 pod_ready.go:82] duration metric: took 400.364544ms for pod "kube-scheduler-addons-701527" in "kube-system" namespace to be "Ready" ...
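
The pod_ready.go checks above differ from the kapi.go waits in that they fetch one named pod at a time (amd-gpu-device-plugin-d2s7j, etcd-addons-701527, and so on) and inspect its Ready condition directly. A hypothetical helper in the same vein, reusing the imports from the earlier sketch; this is not minikube's pod_ready.go code:

    // podIsReady fetches a single named pod and inspects its Ready condition,
    // mirroring the per-pod checks logged above.
    func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
    	p, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

The control-plane pods flip Ready within a few milliseconds here, while metrics-server, whose wait starts just below, keeps reporting "Ready":"False" for the rest of this excerpt.
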
	I1209 23:45:25.306823   16706 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:25.454216   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:25.610062   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:25.726332   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:25.953814   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:26.109731   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:26.226866   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:26.453797   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:26.610197   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:26.727762   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:26.954147   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:27.110568   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:27.227573   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:27.312826   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:27.454087   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:27.610114   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:27.726580   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:27.986307   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:28.187285   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:28.288662   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:28.486793   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:28.689712   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:28.791247   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:28.993410   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:29.190265   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:29.288413   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:29.389954   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:29.487174   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:29.610459   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:29.794460   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:29.986908   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:30.110135   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:30.287840   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:30.487998   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:30.611464   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:30.727430   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:30.954200   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:31.110513   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:31.227291   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:31.454444   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:31.610805   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:31.727626   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:31.813109   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:31.953660   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:32.110120   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:32.226314   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:32.454258   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:32.611347   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:32.726938   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:32.954369   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:33.110733   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:33.227241   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:33.454833   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:33.610768   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:33.790082   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:33.813471   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:33.954444   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:34.110946   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:34.226884   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:34.486905   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:34.610112   16706 kapi.go:107] duration metric: took 1m3.003891856s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1209 23:45:34.726073   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:34.954035   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:35.226793   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:35.485801   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:35.727323   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:35.954765   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:36.227255   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:36.313068   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:36.455454   16706 kapi.go:107] duration metric: took 1m0.504686907s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1209 23:45:36.457409   16706 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-701527 cluster.
	I1209 23:45:36.458875   16706 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1209 23:45:36.460483   16706 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1209 23:45:36.727680   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:37.227455   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:37.726516   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:38.227443   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:38.726483   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:38.812762   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:39.227633   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:39.726656   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:40.226930   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:40.726744   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:40.813228   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:41.226121   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:41.726703   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:42.227062   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:42.726384   16706 kapi.go:107] duration metric: took 1m9.504230323s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1209 23:45:42.728398   16706 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, inspektor-gadget, cloud-spanner, ingress-dns, default-storageclass, storage-provisioner, yakd, storage-provisioner-rancher, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1209 23:45:42.729932   16706 addons.go:510] duration metric: took 1m17.481160441s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin inspektor-gadget cloud-spanner ingress-dns default-storageclass storage-provisioner yakd storage-provisioner-rancher metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1209 23:45:43.312724   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:45.812817   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:47.812855   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:50.313364   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:52.812734   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:54.813143   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:57.314128   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:59.812298   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:46:01.812878   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:46:03.812913   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:46:04.812574   16706 pod_ready.go:93] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"True"
	I1209 23:46:04.812598   16706 pod_ready.go:82] duration metric: took 39.505766562s for pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace to be "Ready" ...
	I1209 23:46:04.812608   16706 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-55d28" in "kube-system" namespace to be "Ready" ...
	I1209 23:46:04.817020   16706 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-55d28" in "kube-system" namespace has status "Ready":"True"
	I1209 23:46:04.817041   16706 pod_ready.go:82] duration metric: took 4.42726ms for pod "nvidia-device-plugin-daemonset-55d28" in "kube-system" namespace to be "Ready" ...
	I1209 23:46:04.817059   16706 pod_ready.go:39] duration metric: took 1m20.32397868s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:46:04.817076   16706 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:46:04.817110   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 23:46:04.817161   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 23:46:04.852318   16706 cri.go:89] found id: "2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b"
	I1209 23:46:04.852349   16706 cri.go:89] found id: ""
	I1209 23:46:04.852356   16706 logs.go:282] 1 containers: [2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b]
	I1209 23:46:04.852410   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:04.855590   16706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 23:46:04.855653   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 23:46:04.887943   16706 cri.go:89] found id: "ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e"
	I1209 23:46:04.887969   16706 cri.go:89] found id: ""
	I1209 23:46:04.887978   16706 logs.go:282] 1 containers: [ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e]
	I1209 23:46:04.888022   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:04.891342   16706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 23:46:04.891417   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 23:46:04.923260   16706 cri.go:89] found id: "077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce"
	I1209 23:46:04.923285   16706 cri.go:89] found id: ""
	I1209 23:46:04.923293   16706 logs.go:282] 1 containers: [077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce]
	I1209 23:46:04.923352   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:04.926703   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 23:46:04.926762   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 23:46:04.960465   16706 cri.go:89] found id: "ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b"
	I1209 23:46:04.960487   16706 cri.go:89] found id: ""
	I1209 23:46:04.960495   16706 logs.go:282] 1 containers: [ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b]
	I1209 23:46:04.960541   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:04.963793   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 23:46:04.963849   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 23:46:04.996394   16706 cri.go:89] found id: "82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b"
	I1209 23:46:04.996415   16706 cri.go:89] found id: ""
	I1209 23:46:04.996422   16706 logs.go:282] 1 containers: [82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b]
	I1209 23:46:04.996473   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:04.999750   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 23:46:04.999800   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 23:46:05.032205   16706 cri.go:89] found id: "25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b"
	I1209 23:46:05.032243   16706 cri.go:89] found id: ""
	I1209 23:46:05.032252   16706 logs.go:282] 1 containers: [25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b]
	I1209 23:46:05.032311   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:05.035468   16706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 23:46:05.035552   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 23:46:05.068521   16706 cri.go:89] found id: "4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697"
	I1209 23:46:05.068542   16706 cri.go:89] found id: ""
	I1209 23:46:05.068550   16706 logs.go:282] 1 containers: [4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697]
	I1209 23:46:05.068594   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:05.071902   16706 logs.go:123] Gathering logs for CRI-O ...
	I1209 23:46:05.071928   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 23:46:05.147074   16706 logs.go:123] Gathering logs for container status ...
	I1209 23:46:05.147109   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 23:46:05.189939   16706 logs.go:123] Gathering logs for dmesg ...
	I1209 23:46:05.189969   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 23:46:05.202543   16706 logs.go:123] Gathering logs for coredns [077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce] ...
	I1209 23:46:05.202572   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce"
	I1209 23:46:05.236746   16706 logs.go:123] Gathering logs for kube-proxy [82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b] ...
	I1209 23:46:05.236775   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b"
	I1209 23:46:05.269131   16706 logs.go:123] Gathering logs for etcd [ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e] ...
	I1209 23:46:05.269159   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e"
	I1209 23:46:05.312923   16706 logs.go:123] Gathering logs for kube-scheduler [ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b] ...
	I1209 23:46:05.312961   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b"
	I1209 23:46:05.350880   16706 logs.go:123] Gathering logs for kube-controller-manager [25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b] ...
	I1209 23:46:05.350914   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b"
	I1209 23:46:05.409010   16706 logs.go:123] Gathering logs for kindnet [4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697] ...
	I1209 23:46:05.409049   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697"
	I1209 23:46:05.442588   16706 logs.go:123] Gathering logs for kubelet ...
	I1209 23:46:05.442614   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 23:46:05.522226   16706 logs.go:123] Gathering logs for describe nodes ...
	I1209 23:46:05.522261   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 23:46:05.618086   16706 logs.go:123] Gathering logs for kube-apiserver [2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b] ...
	I1209 23:46:05.618113   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b"
	I1209 23:46:08.162821   16706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:46:08.177204   16706 api_server.go:72] duration metric: took 1m42.928457075s to wait for apiserver process to appear ...
	I1209 23:46:08.177233   16706 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:46:08.177269   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 23:46:08.177324   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 23:46:08.210360   16706 cri.go:89] found id: "2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b"
	I1209 23:46:08.210382   16706 cri.go:89] found id: ""
	I1209 23:46:08.210391   16706 logs.go:282] 1 containers: [2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b]
	I1209 23:46:08.210449   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:08.213668   16706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 23:46:08.213730   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 23:46:08.246282   16706 cri.go:89] found id: "ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e"
	I1209 23:46:08.246306   16706 cri.go:89] found id: ""
	I1209 23:46:08.246314   16706 logs.go:282] 1 containers: [ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e]
	I1209 23:46:08.246356   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:08.249585   16706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 23:46:08.249642   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 23:46:08.281761   16706 cri.go:89] found id: "077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce"
	I1209 23:46:08.281784   16706 cri.go:89] found id: ""
	I1209 23:46:08.281793   16706 logs.go:282] 1 containers: [077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce]
	I1209 23:46:08.281838   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:08.285092   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 23:46:08.285145   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 23:46:08.317956   16706 cri.go:89] found id: "ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b"
	I1209 23:46:08.317981   16706 cri.go:89] found id: ""
	I1209 23:46:08.317990   16706 logs.go:282] 1 containers: [ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b]
	I1209 23:46:08.318037   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:08.321242   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 23:46:08.321310   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 23:46:08.354116   16706 cri.go:89] found id: "82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b"
	I1209 23:46:08.354139   16706 cri.go:89] found id: ""
	I1209 23:46:08.354147   16706 logs.go:282] 1 containers: [82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b]
	I1209 23:46:08.354192   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:08.357614   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 23:46:08.357677   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 23:46:08.391134   16706 cri.go:89] found id: "25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b"
	I1209 23:46:08.391158   16706 cri.go:89] found id: ""
	I1209 23:46:08.391167   16706 logs.go:282] 1 containers: [25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b]
	I1209 23:46:08.391221   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:08.394525   16706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 23:46:08.394585   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 23:46:08.428821   16706 cri.go:89] found id: "4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697"
	I1209 23:46:08.428846   16706 cri.go:89] found id: ""
	I1209 23:46:08.428853   16706 logs.go:282] 1 containers: [4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697]
	I1209 23:46:08.428901   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:08.432240   16706 logs.go:123] Gathering logs for kubelet ...
	I1209 23:46:08.432262   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 23:46:08.513345   16706 logs.go:123] Gathering logs for dmesg ...
	I1209 23:46:08.513381   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 23:46:08.525375   16706 logs.go:123] Gathering logs for kube-apiserver [2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b] ...
	I1209 23:46:08.525400   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b"
	I1209 23:46:08.568002   16706 logs.go:123] Gathering logs for kube-scheduler [ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b] ...
	I1209 23:46:08.568032   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b"
	I1209 23:46:08.605653   16706 logs.go:123] Gathering logs for kube-proxy [82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b] ...
	I1209 23:46:08.605681   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b"
	I1209 23:46:08.637274   16706 logs.go:123] Gathering logs for CRI-O ...
	I1209 23:46:08.637299   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 23:46:08.709035   16706 logs.go:123] Gathering logs for describe nodes ...
	I1209 23:46:08.709070   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 23:46:08.802753   16706 logs.go:123] Gathering logs for etcd [ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e] ...
	I1209 23:46:08.802781   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e"
	I1209 23:46:08.846140   16706 logs.go:123] Gathering logs for coredns [077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce] ...
	I1209 23:46:08.846172   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce"
	I1209 23:46:08.882461   16706 logs.go:123] Gathering logs for kube-controller-manager [25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b] ...
	I1209 23:46:08.882494   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b"
	I1209 23:46:08.937170   16706 logs.go:123] Gathering logs for kindnet [4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697] ...
	I1209 23:46:08.937202   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697"
	I1209 23:46:08.968760   16706 logs.go:123] Gathering logs for container status ...
	I1209 23:46:08.968792   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 23:46:11.510218   16706 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1209 23:46:11.515435   16706 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1209 23:46:11.516293   16706 api_server.go:141] control plane version: v1.31.2
	I1209 23:46:11.516318   16706 api_server.go:131] duration metric: took 3.339076574s to wait for apiserver health ...
	I1209 23:46:11.516329   16706 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:46:11.516356   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 23:46:11.516413   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 23:46:11.550773   16706 cri.go:89] found id: "2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b"
	I1209 23:46:11.550802   16706 cri.go:89] found id: ""
	I1209 23:46:11.550812   16706 logs.go:282] 1 containers: [2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b]
	I1209 23:46:11.550856   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:11.553990   16706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 23:46:11.554073   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 23:46:11.585952   16706 cri.go:89] found id: "ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e"
	I1209 23:46:11.585980   16706 cri.go:89] found id: ""
	I1209 23:46:11.585990   16706 logs.go:282] 1 containers: [ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e]
	I1209 23:46:11.586037   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:11.589340   16706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 23:46:11.589411   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 23:46:11.623262   16706 cri.go:89] found id: "077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce"
	I1209 23:46:11.623284   16706 cri.go:89] found id: ""
	I1209 23:46:11.623292   16706 logs.go:282] 1 containers: [077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce]
	I1209 23:46:11.623365   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:11.626752   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 23:46:11.626808   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 23:46:11.660667   16706 cri.go:89] found id: "ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b"
	I1209 23:46:11.660695   16706 cri.go:89] found id: ""
	I1209 23:46:11.660704   16706 logs.go:282] 1 containers: [ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b]
	I1209 23:46:11.660763   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:11.664205   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 23:46:11.664264   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 23:46:11.697948   16706 cri.go:89] found id: "82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b"
	I1209 23:46:11.697970   16706 cri.go:89] found id: ""
	I1209 23:46:11.697983   16706 logs.go:282] 1 containers: [82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b]
	I1209 23:46:11.698049   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:11.701640   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 23:46:11.701704   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 23:46:11.735675   16706 cri.go:89] found id: "25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b"
	I1209 23:46:11.735699   16706 cri.go:89] found id: ""
	I1209 23:46:11.735706   16706 logs.go:282] 1 containers: [25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b]
	I1209 23:46:11.735768   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:11.739420   16706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 23:46:11.739519   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 23:46:11.773718   16706 cri.go:89] found id: "4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697"
	I1209 23:46:11.773738   16706 cri.go:89] found id: ""
	I1209 23:46:11.773745   16706 logs.go:282] 1 containers: [4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697]
	I1209 23:46:11.773786   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:11.777093   16706 logs.go:123] Gathering logs for kubelet ...
	I1209 23:46:11.777117   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 23:46:11.857784   16706 logs.go:123] Gathering logs for describe nodes ...
	I1209 23:46:11.857817   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 23:46:11.958361   16706 logs.go:123] Gathering logs for kube-apiserver [2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b] ...
	I1209 23:46:11.958392   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b"
	I1209 23:46:12.001302   16706 logs.go:123] Gathering logs for etcd [ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e] ...
	I1209 23:46:12.001338   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e"
	I1209 23:46:12.044811   16706 logs.go:123] Gathering logs for kube-scheduler [ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b] ...
	I1209 23:46:12.044841   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b"
	I1209 23:46:12.084081   16706 logs.go:123] Gathering logs for kindnet [4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697] ...
	I1209 23:46:12.084112   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697"
	I1209 23:46:12.117877   16706 logs.go:123] Gathering logs for dmesg ...
	I1209 23:46:12.117904   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 23:46:12.129780   16706 logs.go:123] Gathering logs for coredns [077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce] ...
	I1209 23:46:12.129807   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce"
	I1209 23:46:12.164421   16706 logs.go:123] Gathering logs for kube-proxy [82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b] ...
	I1209 23:46:12.164449   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b"
	I1209 23:46:12.199121   16706 logs.go:123] Gathering logs for kube-controller-manager [25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b] ...
	I1209 23:46:12.199153   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b"
	I1209 23:46:12.254981   16706 logs.go:123] Gathering logs for CRI-O ...
	I1209 23:46:12.255018   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 23:46:12.332099   16706 logs.go:123] Gathering logs for container status ...
	I1209 23:46:12.332150   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 23:46:14.886263   16706 system_pods.go:59] 19 kube-system pods found
	I1209 23:46:14.886302   16706 system_pods.go:61] "amd-gpu-device-plugin-d2s7j" [d66910fc-8153-4362-b58d-0c34ded7766f] Running
	I1209 23:46:14.886308   16706 system_pods.go:61] "coredns-7c65d6cfc9-cxp92" [747ebb1d-9978-4fa2-ab7e-103305601b72] Running
	I1209 23:46:14.886312   16706 system_pods.go:61] "csi-hostpath-attacher-0" [0b0e2443-64c0-4547-be9d-da1d058bf73d] Running
	I1209 23:46:14.886317   16706 system_pods.go:61] "csi-hostpath-resizer-0" [72a75e84-083b-4ef7-97db-f519225c8067] Running
	I1209 23:46:14.886320   16706 system_pods.go:61] "csi-hostpathplugin-zzv7p" [3ffdcbe8-d1e6-4c70-bab8-fce95d2a2e42] Running
	I1209 23:46:14.886323   16706 system_pods.go:61] "etcd-addons-701527" [c481e420-6a6c-40c8-a459-b2d1c2882635] Running
	I1209 23:46:14.886327   16706 system_pods.go:61] "kindnet-stv96" [257884a2-cdb5-4b33-a038-33b923fd7bc2] Running
	I1209 23:46:14.886330   16706 system_pods.go:61] "kube-apiserver-addons-701527" [8ad3ecac-c702-4d87-b7ae-05bd01c000a7] Running
	I1209 23:46:14.886334   16706 system_pods.go:61] "kube-controller-manager-addons-701527" [5aea4bee-b06d-48c0-9e63-fadf57fcfb4e] Running
	I1209 23:46:14.886337   16706 system_pods.go:61] "kube-ingress-dns-minikube" [a3ab45ca-887e-40ac-aa72-59e45aa061d9] Running
	I1209 23:46:14.886340   16706 system_pods.go:61] "kube-proxy-qh6vp" [c95618c3-d387-449b-8663-ee463b5f6629] Running
	I1209 23:46:14.886343   16706 system_pods.go:61] "kube-scheduler-addons-701527" [8d434eb3-1a1a-418c-9b15-bcaa57a93874] Running
	I1209 23:46:14.886346   16706 system_pods.go:61] "metrics-server-84c5f94fbc-5g27r" [9401b572-a33f-4211-a676-d07847671042] Running
	I1209 23:46:14.886349   16706 system_pods.go:61] "nvidia-device-plugin-daemonset-55d28" [8ebd5f2c-593c-4804-9e9f-91b53ea7fa82] Running
	I1209 23:46:14.886352   16706 system_pods.go:61] "registry-5cc95cd69-hqlfw" [e0b25e01-7672-4537-ae66-04da6fa6f483] Running
	I1209 23:46:14.886355   16706 system_pods.go:61] "registry-proxy-g2wbp" [1e1d2641-f760-4ea1-9dd2-8579da7521e1] Running
	I1209 23:46:14.886358   16706 system_pods.go:61] "snapshot-controller-56fcc65765-4c42c" [d6203375-ea1a-419a-966b-5e73e6464e19] Running
	I1209 23:46:14.886361   16706 system_pods.go:61] "snapshot-controller-56fcc65765-m42bz" [0cbc53f7-0944-4f38-b866-c24619921bf4] Running
	I1209 23:46:14.886363   16706 system_pods.go:61] "storage-provisioner" [1ad7ef52-a84e-493a-ba89-adee18311a9d] Running
	I1209 23:46:14.886369   16706 system_pods.go:74] duration metric: took 3.370034154s to wait for pod list to return data ...
	I1209 23:46:14.886379   16706 default_sa.go:34] waiting for default service account to be created ...
	I1209 23:46:14.888565   16706 default_sa.go:45] found service account: "default"
	I1209 23:46:14.888590   16706 default_sa.go:55] duration metric: took 2.20265ms for default service account to be created ...
	I1209 23:46:14.888601   16706 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 23:46:14.897479   16706 system_pods.go:86] 19 kube-system pods found
	I1209 23:46:14.897509   16706 system_pods.go:89] "amd-gpu-device-plugin-d2s7j" [d66910fc-8153-4362-b58d-0c34ded7766f] Running
	I1209 23:46:14.897515   16706 system_pods.go:89] "coredns-7c65d6cfc9-cxp92" [747ebb1d-9978-4fa2-ab7e-103305601b72] Running
	I1209 23:46:14.897519   16706 system_pods.go:89] "csi-hostpath-attacher-0" [0b0e2443-64c0-4547-be9d-da1d058bf73d] Running
	I1209 23:46:14.897523   16706 system_pods.go:89] "csi-hostpath-resizer-0" [72a75e84-083b-4ef7-97db-f519225c8067] Running
	I1209 23:46:14.897526   16706 system_pods.go:89] "csi-hostpathplugin-zzv7p" [3ffdcbe8-d1e6-4c70-bab8-fce95d2a2e42] Running
	I1209 23:46:14.897529   16706 system_pods.go:89] "etcd-addons-701527" [c481e420-6a6c-40c8-a459-b2d1c2882635] Running
	I1209 23:46:14.897533   16706 system_pods.go:89] "kindnet-stv96" [257884a2-cdb5-4b33-a038-33b923fd7bc2] Running
	I1209 23:46:14.897537   16706 system_pods.go:89] "kube-apiserver-addons-701527" [8ad3ecac-c702-4d87-b7ae-05bd01c000a7] Running
	I1209 23:46:14.897540   16706 system_pods.go:89] "kube-controller-manager-addons-701527" [5aea4bee-b06d-48c0-9e63-fadf57fcfb4e] Running
	I1209 23:46:14.897544   16706 system_pods.go:89] "kube-ingress-dns-minikube" [a3ab45ca-887e-40ac-aa72-59e45aa061d9] Running
	I1209 23:46:14.897548   16706 system_pods.go:89] "kube-proxy-qh6vp" [c95618c3-d387-449b-8663-ee463b5f6629] Running
	I1209 23:46:14.897558   16706 system_pods.go:89] "kube-scheduler-addons-701527" [8d434eb3-1a1a-418c-9b15-bcaa57a93874] Running
	I1209 23:46:14.897561   16706 system_pods.go:89] "metrics-server-84c5f94fbc-5g27r" [9401b572-a33f-4211-a676-d07847671042] Running
	I1209 23:46:14.897569   16706 system_pods.go:89] "nvidia-device-plugin-daemonset-55d28" [8ebd5f2c-593c-4804-9e9f-91b53ea7fa82] Running
	I1209 23:46:14.897574   16706 system_pods.go:89] "registry-5cc95cd69-hqlfw" [e0b25e01-7672-4537-ae66-04da6fa6f483] Running
	I1209 23:46:14.897580   16706 system_pods.go:89] "registry-proxy-g2wbp" [1e1d2641-f760-4ea1-9dd2-8579da7521e1] Running
	I1209 23:46:14.897583   16706 system_pods.go:89] "snapshot-controller-56fcc65765-4c42c" [d6203375-ea1a-419a-966b-5e73e6464e19] Running
	I1209 23:46:14.897588   16706 system_pods.go:89] "snapshot-controller-56fcc65765-m42bz" [0cbc53f7-0944-4f38-b866-c24619921bf4] Running
	I1209 23:46:14.897591   16706 system_pods.go:89] "storage-provisioner" [1ad7ef52-a84e-493a-ba89-adee18311a9d] Running
	I1209 23:46:14.897597   16706 system_pods.go:126] duration metric: took 8.990695ms to wait for k8s-apps to be running ...
	I1209 23:46:14.897605   16706 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 23:46:14.897686   16706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:46:14.908730   16706 system_svc.go:56] duration metric: took 11.112132ms WaitForService to wait for kubelet
	I1209 23:46:14.908758   16706 kubeadm.go:582] duration metric: took 1m49.660016176s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:46:14.908784   16706 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:46:14.911833   16706 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1209 23:46:14.911858   16706 node_conditions.go:123] node cpu capacity is 8
	I1209 23:46:14.911870   16706 node_conditions.go:105] duration metric: took 3.080426ms to run NodePressure ...
	I1209 23:46:14.911880   16706 start.go:241] waiting for startup goroutines ...
	I1209 23:46:14.911887   16706 start.go:246] waiting for cluster config update ...
	I1209 23:46:14.911901   16706 start.go:255] writing updated cluster config ...
	I1209 23:46:14.912162   16706 ssh_runner.go:195] Run: rm -f paused
	I1209 23:46:14.962210   16706 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 23:46:14.964349   16706 out.go:177] * Done! kubectl is now configured to use "addons-701527" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 09 23:48:19 addons-701527 crio[1028]: time="2024-12-09 23:48:19.899141210Z" level=info msg="Removed pod sandbox: b137a9a1a7d62f434bb153f7a9819f038fa5cfe3500375b2d7b9e642eb5e20fc" id=1e5e95cd-9410-4ba2-9079-4df0370ccbf4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:48:19 addons-701527 crio[1028]: time="2024-12-09 23:48:19.899646991Z" level=info msg="Stopping pod sandbox: 194730041e36ba59e616653f98b03dcd28fef99051032dd36ad64bbe4c71bc38" id=17204a91-c0d9-4902-bdb2-0c64c1268487 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:48:19 addons-701527 crio[1028]: time="2024-12-09 23:48:19.899689755Z" level=info msg="Stopped pod sandbox (already stopped): 194730041e36ba59e616653f98b03dcd28fef99051032dd36ad64bbe4c71bc38" id=17204a91-c0d9-4902-bdb2-0c64c1268487 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:48:19 addons-701527 crio[1028]: time="2024-12-09 23:48:19.899961697Z" level=info msg="Removing pod sandbox: 194730041e36ba59e616653f98b03dcd28fef99051032dd36ad64bbe4c71bc38" id=05e4b59f-13ff-4987-8d7c-625981f3fc0f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:48:19 addons-701527 crio[1028]: time="2024-12-09 23:48:19.906838125Z" level=info msg="Removed pod sandbox: 194730041e36ba59e616653f98b03dcd28fef99051032dd36ad64bbe4c71bc38" id=05e4b59f-13ff-4987-8d7c-625981f3fc0f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:48:19 addons-701527 crio[1028]: time="2024-12-09 23:48:19.907208620Z" level=info msg="Stopping pod sandbox: 77fd67e4309bb4e3cf692fe7a7320469a20868b5f421886b851826a234e8d9ec" id=0820f87d-3344-4b72-93fb-a610c8f4d3e3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:48:19 addons-701527 crio[1028]: time="2024-12-09 23:48:19.907244195Z" level=info msg="Stopped pod sandbox (already stopped): 77fd67e4309bb4e3cf692fe7a7320469a20868b5f421886b851826a234e8d9ec" id=0820f87d-3344-4b72-93fb-a610c8f4d3e3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:48:19 addons-701527 crio[1028]: time="2024-12-09 23:48:19.907493702Z" level=info msg="Removing pod sandbox: 77fd67e4309bb4e3cf692fe7a7320469a20868b5f421886b851826a234e8d9ec" id=551304ca-fd05-4cb6-beaa-a3b40323c0a4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:48:19 addons-701527 crio[1028]: time="2024-12-09 23:48:19.913567679Z" level=info msg="Removed pod sandbox: 77fd67e4309bb4e3cf692fe7a7320469a20868b5f421886b851826a234e8d9ec" id=551304ca-fd05-4cb6-beaa-a3b40323c0a4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:48:19 addons-701527 crio[1028]: time="2024-12-09 23:48:19.913928989Z" level=info msg="Stopping pod sandbox: a14f542990302ad09720e8091345bb556b47d12cd6b5250511f70d6f8003f055" id=b4803990-1cc7-436c-9df8-f3176cba8a6a name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:48:19 addons-701527 crio[1028]: time="2024-12-09 23:48:19.913953432Z" level=info msg="Stopped pod sandbox (already stopped): a14f542990302ad09720e8091345bb556b47d12cd6b5250511f70d6f8003f055" id=b4803990-1cc7-436c-9df8-f3176cba8a6a name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:48:19 addons-701527 crio[1028]: time="2024-12-09 23:48:19.914235009Z" level=info msg="Removing pod sandbox: a14f542990302ad09720e8091345bb556b47d12cd6b5250511f70d6f8003f055" id=5d638cb9-2383-46f3-9369-8012c3b0e218 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:48:19 addons-701527 crio[1028]: time="2024-12-09 23:48:19.920261579Z" level=info msg="Removed pod sandbox: a14f542990302ad09720e8091345bb556b47d12cd6b5250511f70d6f8003f055" id=5d638cb9-2383-46f3-9369-8012c3b0e218 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:49:38 addons-701527 crio[1028]: time="2024-12-09 23:49:38.266739883Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-tqms7/POD" id=bc7d043b-79bc-4b57-90cd-9ac133d60217 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 23:49:38 addons-701527 crio[1028]: time="2024-12-09 23:49:38.266818333Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 09 23:49:38 addons-701527 crio[1028]: time="2024-12-09 23:49:38.297922184Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-tqms7 Namespace:default ID:babd6ccc73c92aa0fe3a4d0dbcbe00943124351df2c5ecb030951eba2e652e71 UID:76e4b97e-a8bc-46cf-9ad0-9aa50696d58d NetNS:/var/run/netns/4964bbe5-2d68-4d3f-95d2-f49ef25a991c Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 09 23:49:38 addons-701527 crio[1028]: time="2024-12-09 23:49:38.297965232Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-tqms7 to CNI network \"kindnet\" (type=ptp)"
	Dec 09 23:49:38 addons-701527 crio[1028]: time="2024-12-09 23:49:38.312599334Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-tqms7 Namespace:default ID:babd6ccc73c92aa0fe3a4d0dbcbe00943124351df2c5ecb030951eba2e652e71 UID:76e4b97e-a8bc-46cf-9ad0-9aa50696d58d NetNS:/var/run/netns/4964bbe5-2d68-4d3f-95d2-f49ef25a991c Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 09 23:49:38 addons-701527 crio[1028]: time="2024-12-09 23:49:38.312753910Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-tqms7 for CNI network kindnet (type=ptp)"
	Dec 09 23:49:38 addons-701527 crio[1028]: time="2024-12-09 23:49:38.315282869Z" level=info msg="Ran pod sandbox babd6ccc73c92aa0fe3a4d0dbcbe00943124351df2c5ecb030951eba2e652e71 with infra container: default/hello-world-app-55bf9c44b4-tqms7/POD" id=bc7d043b-79bc-4b57-90cd-9ac133d60217 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 09 23:49:38 addons-701527 crio[1028]: time="2024-12-09 23:49:38.316411639Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=6b066af9-4790-4d73-86ea-bdfeb55f0966 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 23:49:38 addons-701527 crio[1028]: time="2024-12-09 23:49:38.316611486Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=6b066af9-4790-4d73-86ea-bdfeb55f0966 name=/runtime.v1.ImageService/ImageStatus
	Dec 09 23:49:38 addons-701527 crio[1028]: time="2024-12-09 23:49:38.317123819Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=4de70682-6703-4f0f-8e8d-3a0368f8dc9c name=/runtime.v1.ImageService/PullImage
	Dec 09 23:49:38 addons-701527 crio[1028]: time="2024-12-09 23:49:38.321307376Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 09 23:49:38 addons-701527 crio[1028]: time="2024-12-09 23:49:38.795134583Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	10645a732c52f       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago       Running             nginx                     0                   0856bdf086c6e       nginx
	929f01b55a429       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   714101868ecf3       busybox
	eb7e1db683204       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             4 minutes ago       Running             controller                0                   c934de94e4956       ingress-nginx-controller-5f85ff4588-pbnrr
	f0e0fc3f45764       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              patch                     0                   92926b116445c       ingress-nginx-admission-patch-gghlb
	36f54d74fafe6       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   2a3fd7578af2b       kube-ingress-dns-minikube
	6ab22756eac7d       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago       Running             metrics-server            0                   1d734532e8663       metrics-server-84c5f94fbc-5g27r
	d7afc8b4ed276       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   235a47d127e9d       ingress-nginx-admission-create-hnqc5
	077b85dec6016       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   c7fdeb3ef79c5       coredns-7c65d6cfc9-cxp92
	c3dbc738cd674       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   51167656b2653       storage-provisioner
	4147aac866b08       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3                           5 minutes ago       Running             kindnet-cni               0                   9d6850a00e670       kindnet-stv96
	82e3678abb6c9       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             5 minutes ago       Running             kube-proxy                0                   aa81dd7c70085       kube-proxy-qh6vp
	25f645634229e       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             5 minutes ago       Running             kube-controller-manager   0                   bf9719be3d23e       kube-controller-manager-addons-701527
	2618b912b82c1       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             5 minutes ago       Running             kube-apiserver            0                   de652e399d7b7       kube-apiserver-addons-701527
	ae395d5ec54d0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   88e1369f9daff       etcd-addons-701527
	ce5eeb0ab7db8       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             5 minutes ago       Running             kube-scheduler            0                   103590d89ff0a       kube-scheduler-addons-701527
	
	
	==> coredns [077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce] <==
	[INFO] 10.244.0.14:40084 - 5420 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00006s
	[INFO] 10.244.0.14:35166 - 22470 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005346936s
	[INFO] 10.244.0.14:35166 - 22704 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005674683s
	[INFO] 10.244.0.14:36042 - 8638 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005873191s
	[INFO] 10.244.0.14:36042 - 9000 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006185268s
	[INFO] 10.244.0.14:60566 - 54350 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.007010068s
	[INFO] 10.244.0.14:60566 - 54639 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00763795s
	[INFO] 10.244.0.14:37272 - 48576 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096142s
	[INFO] 10.244.0.14:37272 - 48826 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000201678s
	[INFO] 10.244.0.22:49372 - 62930 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000222451s
	[INFO] 10.244.0.22:42253 - 3792 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000300334s
	[INFO] 10.244.0.22:48311 - 32336 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000110815s
	[INFO] 10.244.0.22:58044 - 16832 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000141483s
	[INFO] 10.244.0.22:58727 - 65158 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101335s
	[INFO] 10.244.0.22:34114 - 45725 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091947s
	[INFO] 10.244.0.22:37380 - 44447 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007474503s
	[INFO] 10.244.0.22:52017 - 50482 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007668607s
	[INFO] 10.244.0.22:39589 - 24686 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008529996s
	[INFO] 10.244.0.22:59526 - 56787 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008607472s
	[INFO] 10.244.0.22:35968 - 37934 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007070902s
	[INFO] 10.244.0.22:35633 - 11788 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007789725s
	[INFO] 10.244.0.22:54126 - 55765 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000868298s
	[INFO] 10.244.0.22:55309 - 43646 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000919796s
	[INFO] 10.244.0.25:35789 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000256541s
	[INFO] 10.244.0.25:59971 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000162167s
	
	
	==> describe nodes <==
	Name:               addons-701527
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-701527
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=addons-701527
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T23_44_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-701527
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 23:44:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-701527
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 23:49:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 23:47:23 +0000   Mon, 09 Dec 2024 23:44:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 23:47:23 +0000   Mon, 09 Dec 2024 23:44:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 23:47:23 +0000   Mon, 09 Dec 2024 23:44:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 23:47:23 +0000   Mon, 09 Dec 2024 23:44:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-701527
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	System Info:
	  Machine ID:                 775aa4cfeeac4fecbd38c1488c56dfa0
	  System UUID:                05d8adc9-27f0-43f6-9f5c-2780c35710f8
	  Boot ID:                    fcda772d-4207-4ab9-84d8-f9ba5cb81f2f
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  default                     hello-world-app-55bf9c44b4-tqms7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-pbnrr    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         5m8s
	  kube-system                 coredns-7c65d6cfc9-cxp92                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m14s
	  kube-system                 etcd-addons-701527                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m20s
	  kube-system                 kindnet-stv96                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m14s
	  kube-system                 kube-apiserver-addons-701527                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-controller-manager-addons-701527        200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-proxy-qh6vp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-scheduler-addons-701527                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 metrics-server-84c5f94fbc-5g27r              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         5m9s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m9s   kube-proxy       
	  Normal   Starting                 5m20s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m20s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m20s  kubelet          Node addons-701527 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m20s  kubelet          Node addons-701527 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m20s  kubelet          Node addons-701527 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m15s  node-controller  Node addons-701527 event: Registered Node addons-701527 in Controller
	  Normal   NodeReady                4m55s  kubelet          Node addons-701527 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000758] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.005178] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001365] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.645483] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025447] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.034285] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.032948] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +7.141697] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 9 23:47] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[  +1.015721] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[  +2.011802] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[  +4.127509] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[  +8.191113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[ +16.130221] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[Dec 9 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	
	
	==> etcd [ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e] <==
	{"level":"info","ts":"2024-12-09T23:44:28.188269Z","caller":"traceutil/trace.go:171","msg":"trace[325315929] transaction","detail":"{read_only:false; number_of_response:1; response_revision:387; }","duration":"203.880945ms","start":"2024-12-09T23:44:27.984378Z","end":"2024-12-09T23:44:28.188259Z","steps":["trace[325315929] 'process raft request'  (duration: 201.436967ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:44:28.303994Z","caller":"traceutil/trace.go:171","msg":"trace[1532357115] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"119.952721ms","start":"2024-12-09T23:44:28.184000Z","end":"2024-12-09T23:44:28.303952Z","steps":["trace[1532357115] 'process raft request'  (duration: 105.19725ms)","trace[1532357115] 'compare'  (duration: 12.783026ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:44:28.304566Z","caller":"traceutil/trace.go:171","msg":"trace[393444608] linearizableReadLoop","detail":"{readStateIndex:399; appliedIndex:398; }","duration":"116.642703ms","start":"2024-12-09T23:44:28.187909Z","end":"2024-12-09T23:44:28.304551Z","steps":["trace[393444608] 'read index received'  (duration: 102.146105ms)","trace[393444608] 'applied index is now lower than readState.Index'  (duration: 14.481527ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:44:28.304741Z","caller":"traceutil/trace.go:171","msg":"trace[1793983851] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"115.980584ms","start":"2024-12-09T23:44:28.188750Z","end":"2024-12-09T23:44:28.304731Z","steps":["trace[1793983851] 'process raft request'  (duration: 113.368616ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:44:28.306758Z","caller":"traceutil/trace.go:171","msg":"trace[1446547101] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"117.685631ms","start":"2024-12-09T23:44:28.189051Z","end":"2024-12-09T23:44:28.306737Z","steps":["trace[1446547101] 'process raft request'  (duration: 113.110432ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:44:28.306955Z","caller":"traceutil/trace.go:171","msg":"trace[897928018] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"112.895918ms","start":"2024-12-09T23:44:28.194051Z","end":"2024-12-09T23:44:28.306947Z","steps":["trace[897928018] 'process raft request'  (duration: 108.142976ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:44:28.307052Z","caller":"traceutil/trace.go:171","msg":"trace[329491598] transaction","detail":"{read_only:false; number_of_response:1; response_revision:393; }","duration":"112.966783ms","start":"2024-12-09T23:44:28.194079Z","end":"2024-12-09T23:44:28.307046Z","steps":["trace[329491598] 'process raft request'  (duration: 109.094567ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:44:28.383748Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.792811ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4096"}
	{"level":"info","ts":"2024-12-09T23:44:28.383885Z","caller":"traceutil/trace.go:171","msg":"trace[1601394470] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:394; }","duration":"195.962881ms","start":"2024-12-09T23:44:28.187904Z","end":"2024-12-09T23:44:28.383867Z","steps":["trace[1601394470] 'agreement among raft nodes before linearized reading'  (duration: 119.576811ms)","trace[1601394470] 'range keys from bolt db'  (duration: 76.184625ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:44:28.401149Z","caller":"traceutil/trace.go:171","msg":"trace[190457932] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"103.731437ms","start":"2024-12-09T23:44:28.297399Z","end":"2024-12-09T23:44:28.401131Z","steps":["trace[190457932] 'process raft request'  (duration: 103.465483ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:44:28.401354Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.523072ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-701527\" ","response":"range_response_count:1 size:5655"}
	{"level":"info","ts":"2024-12-09T23:44:28.401460Z","caller":"traceutil/trace.go:171","msg":"trace[862798791] range","detail":"{range_begin:/registry/minions/addons-701527; range_end:; response_count:1; response_revision:395; }","duration":"212.639122ms","start":"2024-12-09T23:44:28.188810Z","end":"2024-12-09T23:44:28.401449Z","steps":["trace[862798791] 'agreement among raft nodes before linearized reading'  (duration: 212.493838ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:44:28.600604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.504839ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-12-09T23:44:28.600686Z","caller":"traceutil/trace.go:171","msg":"trace[181894478] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:402; }","duration":"107.601345ms","start":"2024-12-09T23:44:28.493054Z","end":"2024-12-09T23:44:28.600655Z","steps":["trace[181894478] 'agreement among raft nodes before linearized reading'  (duration: 107.423959ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:44:28.600975Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.951275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-701527\" ","response":"range_response_count:1 size:5655"}
	{"level":"info","ts":"2024-12-09T23:44:28.601109Z","caller":"traceutil/trace.go:171","msg":"trace[244838227] range","detail":"{range_begin:/registry/minions/addons-701527; range_end:; response_count:1; response_revision:402; }","duration":"108.059838ms","start":"2024-12-09T23:44:28.493012Z","end":"2024-12-09T23:44:28.601072Z","steps":["trace[244838227] 'agreement among raft nodes before linearized reading'  (duration: 107.923039ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:44:28.601462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.33806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/traces.gadget.kinvolk.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:44:28.601552Z","caller":"traceutil/trace.go:171","msg":"trace[55582856] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/traces.gadget.kinvolk.io; range_end:; response_count:0; response_revision:402; }","duration":"108.428175ms","start":"2024-12-09T23:44:28.493113Z","end":"2024-12-09T23:44:28.601541Z","steps":["trace[55582856] 'agreement among raft nodes before linearized reading'  (duration: 108.303748ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:44:29.000571Z","caller":"traceutil/trace.go:171","msg":"trace[149294942] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"100.040439ms","start":"2024-12-09T23:44:28.900518Z","end":"2024-12-09T23:44:29.000559Z","steps":["trace[149294942] 'process raft request'  (duration: 87.770465ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:44:29.099884Z","caller":"traceutil/trace.go:171","msg":"trace[537937198] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"106.241349ms","start":"2024-12-09T23:44:28.993626Z","end":"2024-12-09T23:44:29.099867Z","steps":["trace[537937198] 'process raft request'  (duration: 91.607451ms)","trace[537937198] 'compare'  (duration: 14.381518ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T23:44:29.100414Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.225693ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-701527\" ","response":"range_response_count:1 size:5655"}
	{"level":"info","ts":"2024-12-09T23:44:29.100709Z","caller":"traceutil/trace.go:171","msg":"trace[1047754935] range","detail":"{range_begin:/registry/minions/addons-701527; range_end:; response_count:1; response_revision:417; }","duration":"100.512235ms","start":"2024-12-09T23:44:29.000170Z","end":"2024-12-09T23:44:29.100682Z","steps":["trace[1047754935] 'agreement among raft nodes before linearized reading'  (duration: 100.0103ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:44:29.101046Z","caller":"traceutil/trace.go:171","msg":"trace[1793953] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"100.789141ms","start":"2024-12-09T23:44:29.000245Z","end":"2024-12-09T23:44:29.101034Z","steps":["trace[1793953] 'process raft request'  (duration: 100.511212ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:44:29.102030Z","caller":"traceutil/trace.go:171","msg":"trace[136168466] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"101.490397ms","start":"2024-12-09T23:44:29.000529Z","end":"2024-12-09T23:44:29.102019Z","steps":["trace[136168466] 'process raft request'  (duration: 100.315559ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:45:49.656961Z","caller":"traceutil/trace.go:171","msg":"trace[1818210935] transaction","detail":"{read_only:false; response_revision:1211; number_of_response:1; }","duration":"106.060224ms","start":"2024-12-09T23:45:49.550884Z","end":"2024-12-09T23:45:49.656944Z","steps":["trace[1818210935] 'process raft request'  (duration: 105.954366ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:49:39 up 32 min,  0 users,  load average: 0.30, 0.65, 0.37
	Linux addons-701527 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697] <==
	I1209 23:47:33.784478       1 main.go:301] handling current node
	I1209 23:47:43.785578       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:47:43.785620       1 main.go:301] handling current node
	I1209 23:47:53.793677       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:47:53.793713       1 main.go:301] handling current node
	I1209 23:48:03.784482       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:48:03.784524       1 main.go:301] handling current node
	I1209 23:48:13.784673       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:48:13.784725       1 main.go:301] handling current node
	I1209 23:48:23.788533       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:48:23.788573       1 main.go:301] handling current node
	I1209 23:48:33.784814       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:48:33.784849       1 main.go:301] handling current node
	I1209 23:48:43.790611       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:48:43.790646       1 main.go:301] handling current node
	I1209 23:48:53.787869       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:48:53.787907       1 main.go:301] handling current node
	I1209 23:49:03.794000       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:49:03.794033       1 main.go:301] handling current node
	I1209 23:49:13.791592       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:49:13.791626       1 main.go:301] handling current node
	I1209 23:49:23.784779       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:49:23.784817       1 main.go:301] handling current node
	I1209 23:49:33.784686       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:49:33.784729       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b] <==
	E1209 23:46:04.409438       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.68.101:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.68.101:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.68.101:443: connect: connection refused" logger="UnhandledError"
	I1209 23:46:04.440705       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1209 23:46:22.656162       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58740: use of closed network connection
	E1209 23:46:22.815646       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58772: use of closed network connection
	I1209 23:46:31.754419       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.47.197"}
	I1209 23:46:37.493599       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1209 23:46:38.609504       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1209 23:47:02.124474       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1209 23:47:14.982082       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1209 23:47:18.162100       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1209 23:47:18.324563       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.4.42"}
	I1209 23:47:21.753833       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:21.753877       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:47:21.802327       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:21.802524       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:47:21.888483       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:21.888532       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:47:21.889403       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:21.889507       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:47:21.909524       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:21.909567       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1209 23:47:22.888525       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1209 23:47:22.909765       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1209 23:47:23.006378       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1209 23:49:38.100259       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.75.201"}
	
	
	==> kube-controller-manager [25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b] <==
	E1209 23:47:59.039205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:47:59.785768       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:47:59.785814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:48:05.992414       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:48:05.992454       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:48:21.823785       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:48:21.823830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:48:36.148167       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:48:36.148202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:48:40.370605       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:48:40.370640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:48:49.471125       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:48:49.471168       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:49:12.708505       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:49:12.708544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:49:15.805260       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:49:15.805299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:49:23.458575       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:49:23.458628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:49:29.507002       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:49:29.507043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1209 23:49:37.965216       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.744202ms"
	I1209 23:49:37.969048       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="3.789367ms"
	I1209 23:49:37.969127       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="38.642µs"
	I1209 23:49:37.974471       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="41.089µs"
	
	
	==> kube-proxy [82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b] <==
	I1209 23:44:27.802596       1 server_linux.go:66] "Using iptables proxy"
	I1209 23:44:29.108824       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1209 23:44:29.108882       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 23:44:29.684759       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 23:44:29.684881       1 server_linux.go:169] "Using iptables Proxier"
	I1209 23:44:29.699616       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 23:44:29.700499       1 server.go:483] "Version info" version="v1.31.2"
	I1209 23:44:29.700597       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:44:29.704576       1 config.go:105] "Starting endpoint slice config controller"
	I1209 23:44:29.704686       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 23:44:29.704767       1 config.go:199] "Starting service config controller"
	I1209 23:44:29.704800       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 23:44:29.704988       1 config.go:328] "Starting node config controller"
	I1209 23:44:29.705074       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 23:44:29.805457       1 shared_informer.go:320] Caches are synced for service config
	I1209 23:44:29.805650       1 shared_informer.go:320] Caches are synced for node config
	I1209 23:44:29.805703       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b] <==
	W1209 23:44:17.595236       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1209 23:44:17.595330       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1209 23:44:17.595347       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1209 23:44:17.595358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.595235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1209 23:44:17.595379       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E1209 23:44:17.595395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.595398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1209 23:44:17.595354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1209 23:44:17.595428       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.595431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1209 23:44:17.595479       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 23:44:17.595525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1209 23:44:17.595483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.595591       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 23:44:17.595622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:18.410848       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1209 23:44:18.410886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:18.419558       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 23:44:18.419592       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1209 23:44:18.624088       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 23:44:18.624127       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:18.642537       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 23:44:18.642575       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1209 23:44:20.092161       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 23:49:37 addons-701527 kubelet[1635]: E1209 23:49:37.964500    1635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ffdcbe8-d1e6-4c70-bab8-fce95d2a2e42" containerName="csi-snapshotter"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: E1209 23:49:37.964510    1635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d6203375-ea1a-419a-966b-5e73e6464e19" containerName="volume-snapshot-controller"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: E1209 23:49:37.964522    1635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c5f7ea3e-3d8e-411c-9255-b93d551a5b0d" containerName="local-path-provisioner"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: E1209 23:49:37.964533    1635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ffdcbe8-d1e6-4c70-bab8-fce95d2a2e42" containerName="csi-external-health-monitor-controller"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: E1209 23:49:37.964545    1635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0cbc53f7-0944-4f38-b866-c24619921bf4" containerName="volume-snapshot-controller"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: E1209 23:49:37.964554    1635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b0e2443-64c0-4547-be9d-da1d058bf73d" containerName="csi-attacher"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: E1209 23:49:37.964564    1635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ffdcbe8-d1e6-4c70-bab8-fce95d2a2e42" containerName="liveness-probe"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: E1209 23:49:37.964574    1635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ffdcbe8-d1e6-4c70-bab8-fce95d2a2e42" containerName="hostpath"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: E1209 23:49:37.964583    1635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e1ef76ab-fe48-45b9-9fad-5a0f5a2bf984" containerName="task-pv-container"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: E1209 23:49:37.964598    1635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ffdcbe8-d1e6-4c70-bab8-fce95d2a2e42" containerName="csi-provisioner"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: I1209 23:49:37.964658    1635 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a75e84-083b-4ef7-97db-f519225c8067" containerName="csi-resizer"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: I1209 23:49:37.964669    1635 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ffdcbe8-d1e6-4c70-bab8-fce95d2a2e42" containerName="hostpath"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: I1209 23:49:37.964678    1635 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ffdcbe8-d1e6-4c70-bab8-fce95d2a2e42" containerName="csi-snapshotter"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: I1209 23:49:37.964689    1635 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ffdcbe8-d1e6-4c70-bab8-fce95d2a2e42" containerName="node-driver-registrar"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: I1209 23:49:37.964698    1635 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ffdcbe8-d1e6-4c70-bab8-fce95d2a2e42" containerName="csi-provisioner"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: I1209 23:49:37.964707    1635 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0e2443-64c0-4547-be9d-da1d058bf73d" containerName="csi-attacher"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: I1209 23:49:37.964716    1635 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ffdcbe8-d1e6-4c70-bab8-fce95d2a2e42" containerName="csi-external-health-monitor-controller"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: I1209 23:49:37.964726    1635 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cbc53f7-0944-4f38-b866-c24619921bf4" containerName="volume-snapshot-controller"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: I1209 23:49:37.964735    1635 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ffdcbe8-d1e6-4c70-bab8-fce95d2a2e42" containerName="liveness-probe"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: I1209 23:49:37.964744    1635 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6203375-ea1a-419a-966b-5e73e6464e19" containerName="volume-snapshot-controller"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: I1209 23:49:37.964752    1635 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f7ea3e-3d8e-411c-9255-b93d551a5b0d" containerName="local-path-provisioner"
	Dec 09 23:49:37 addons-701527 kubelet[1635]: I1209 23:49:37.964763    1635 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1ef76ab-fe48-45b9-9fad-5a0f5a2bf984" containerName="task-pv-container"
	Dec 09 23:49:38 addons-701527 kubelet[1635]: I1209 23:49:38.108804    1635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xggr\" (UniqueName: \"kubernetes.io/projected/76e4b97e-a8bc-46cf-9ad0-9aa50696d58d-kube-api-access-6xggr\") pod \"hello-world-app-55bf9c44b4-tqms7\" (UID: \"76e4b97e-a8bc-46cf-9ad0-9aa50696d58d\") " pod="default/hello-world-app-55bf9c44b4-tqms7"
	Dec 09 23:49:39 addons-701527 kubelet[1635]: E1209 23:49:39.843238    1635 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788179843041994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:49:39 addons-701527 kubelet[1635]: E1209 23:49:39.843276    1635 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788179843041994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [c3dbc738cd6745fb4f46c65bb5828e5e2bb14cd4b9623bc4688e129fedfe42e5] <==
	I1209 23:44:44.991990       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 23:44:44.999539       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 23:44:44.999596       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 23:44:45.008366       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 23:44:45.008511       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b90eb7c8-2ae9-4bf6-89db-add0f773b69f", APIVersion:"v1", ResourceVersion:"890", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-701527_47b9cfa5-c4a3-4d5d-adbb-82aacbcfae88 became leader
	I1209 23:44:45.008555       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-701527_47b9cfa5-c4a3-4d5d-adbb-82aacbcfae88!
	I1209 23:44:45.109376       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-701527_47b9cfa5-c4a3-4d5d-adbb-82aacbcfae88!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-701527 -n addons-701527
helpers_test.go:261: (dbg) Run:  kubectl --context addons-701527 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-hnqc5 ingress-nginx-admission-patch-gghlb
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-701527 describe pod ingress-nginx-admission-create-hnqc5 ingress-nginx-admission-patch-gghlb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-701527 describe pod ingress-nginx-admission-create-hnqc5 ingress-nginx-admission-patch-gghlb: exit status 1 (57.920192ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hnqc5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gghlb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-701527 describe pod ingress-nginx-admission-create-hnqc5 ingress-nginx-admission-patch-gghlb: exit status 1
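Note: the two admission pods named above are one-shot Job pods the ingress-nginx addon runs to create and patch its webhook certificates; by the time the post-mortem executes they have been garbage-collected, so the NotFound errors are expected and the harness records the non-zero exit without treating it as the test failure (that was recorded at addons_test.go:278). A minimal sketch of such a tolerant check — the pod names are the ones from this log, while the error handling is an assumption about the helper's intent, not its actual code:

	// describepods.go - hypothetical sketch, not helpers_test.go code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "addons-701527",
			"describe", "pod",
			"ingress-nginx-admission-create-hnqc5",
			"ingress-nginx-admission-patch-gghlb").CombinedOutput()
		if err != nil && strings.Contains(string(out), "NotFound") {
			// Completed admission Job pods get cleaned up; treat their
			// absence as informational rather than as a failure.
			fmt.Println("admission pods already cleaned up")
			return
		}
		fmt.Print(string(out))
	}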
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-701527 addons disable ingress-dns --alsologtostderr -v=1: (1.561381135s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-701527 addons disable ingress --alsologtostderr -v=1: (7.618058079s)
--- FAIL: TestAddons/parallel/Ingress (151.73s)
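Note: exit status 28 from the curl step above is CURLE_OPERATION_TIMEDOUT, i.e. the ingress-nginx controller never answered on 127.0.0.1:80 inside the node within curl's window, even though the nginx pod itself went Running within ~10s. A minimal Go sketch of the same probe run from the host against the node IP — the IP and Host header come from this log; the rest is an assumption, not the addons_test.go implementation:

	// ingressprobe.go - hypothetical sketch of the ingress check.
	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		req, err := http.NewRequest("GET", "http://192.168.49.2/", nil)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// The ingress rule routes on the Host header, not the URL path,
		// so the request must carry the expected virtual host.
		req.Host = "nginx.example.com"
		resp, err := client.Do(req)
		if err != nil {
			fmt.Fprintln(os.Stderr, "ingress not answering:", err)
			os.Exit(1) // roughly what curl's exit status 28 signalled
		}
		defer resp.Body.Close()
		fmt.Println("ingress answered with", resp.Status)
	}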

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (366.66s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.775988ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-5g27r" [9401b572-a33f-4211-a676-d07847671042] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00284908s
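Note: a Ready metrics-server pod does not imply the aggregated metrics API is serving; the kube-apiserver excerpt above shows v1beta1.metrics.k8s.io failing its availability check with connection refused at 23:46:04, and `kubectl top` depends on that APIService. A sketch of the signal one could poll instead — the kubectl command and resource name are real; the wrapper itself is hypothetical:

	// apicheck.go - hypothetical sketch; checks the APIService condition
	// that `kubectl top` actually relies on.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, _ := exec.Command("kubectl", "--context", "addons-701527",
			"get", "apiservice", "v1beta1.metrics.k8s.io",
			"-o", "jsonpath={.status.conditions[?(@.type==\"Available\")].status}").CombinedOutput()
		fmt.Println("metrics.k8s.io Available:", string(out))
	}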
addons_test.go:402: (dbg) Run:  kubectl --context addons-701527 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-701527 top pods -n kube-system: exit status 1 (68.049482ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cxp92, age: 2m11.137656415s

                                                
                                                
** /stderr **
I1209 23:46:36.140121   15396 retry.go:31] will retry after 4.068666629s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-701527 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-701527 top pods -n kube-system: exit status 1 (64.516812ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cxp92, age: 2m15.271701658s

                                                
                                                
** /stderr **
I1209 23:46:40.273960   15396 retry.go:31] will retry after 2.855234956s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-701527 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-701527 top pods -n kube-system: exit status 1 (69.018326ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cxp92, age: 2m18.197288429s

                                                
                                                
** /stderr **
I1209 23:46:43.199317   15396 retry.go:31] will retry after 5.925062836s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-701527 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-701527 top pods -n kube-system: exit status 1 (63.296972ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-d2s7j, age: 2m5.185486711s

                                                
                                                
** /stderr **
I1209 23:46:49.188040   15396 retry.go:31] will retry after 7.961861422s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-701527 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-701527 top pods -n kube-system: exit status 1 (62.100444ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cxp92, age: 2m32.21027175s

                                                
                                                
** /stderr **
I1209 23:46:57.212277   15396 retry.go:31] will retry after 16.310353222s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-701527 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-701527 top pods -n kube-system: exit status 1 (63.295161ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cxp92, age: 2m48.584840915s

                                                
                                                
** /stderr **
I1209 23:47:13.586827   15396 retry.go:31] will retry after 11.768797119s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-701527 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-701527 top pods -n kube-system: exit status 1 (61.159468ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cxp92, age: 3m0.415813042s

                                                
                                                
** /stderr **
I1209 23:47:25.417989   15396 retry.go:31] will retry after 32.44465496s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-701527 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-701527 top pods -n kube-system: exit status 1 (62.43268ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cxp92, age: 3m32.924087898s

                                                
                                                
** /stderr **
I1209 23:47:57.926220   15396 retry.go:31] will retry after 1m8.352793681s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-701527 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-701527 top pods -n kube-system: exit status 1 (61.656262ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cxp92, age: 4m41.339301527s

                                                
                                                
** /stderr **
I1209 23:49:06.341220   15396 retry.go:31] will retry after 1m8.5834931s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-701527 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-701527 top pods -n kube-system: exit status 1 (63.307092ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cxp92, age: 5m49.992991484s

                                                
                                                
** /stderr **
I1209 23:50:14.995102   15396 retry.go:31] will retry after 1m28.583738133s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-701527 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-701527 top pods -n kube-system: exit status 1 (63.140197ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cxp92, age: 7m18.640505638s

                                                
                                                
** /stderr **
I1209 23:51:43.642833   15396 retry.go:31] will retry after 51.544053172s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-701527 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-701527 top pods -n kube-system: exit status 1 (62.674497ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-cxp92, age: 8m10.254554345s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
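Note: the harness retried `kubectl top pods` with growing backoff for roughly eight minutes (the retry.go:31 lines above) and every attempt returned "Metrics not available", so metrics-server never produced pod metrics despite its pod being Ready. A rough stand-in for that loop — the budget, base delay, and plain doubling are assumptions; the real delays above look randomized and minikube's values differ:

	// topwait.go - hypothetical sketch of the retry loop, not minikube code.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(8 * time.Minute)
		wait := 4 * time.Second
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "addons-701527",
				"top", "pods", "-n", "kube-system").CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return // metrics-server is finally serving pod metrics
			}
			fmt.Fprintf(os.Stderr, "will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
			wait *= 2 // crude exponential backoff
		}
		fmt.Fprintln(os.Stderr, "failed checking metric server")
		os.Exit(1)
	}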
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-701527
helpers_test.go:235: (dbg) docker inspect addons-701527:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "845a2978a2b56775d174983f7156b47f75cbd517d111b717fa683318c531e629",
	        "Created": "2024-12-09T23:44:06.447018212Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 17464,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-09T23:44:06.580880962Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/845a2978a2b56775d174983f7156b47f75cbd517d111b717fa683318c531e629/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/845a2978a2b56775d174983f7156b47f75cbd517d111b717fa683318c531e629/hostname",
	        "HostsPath": "/var/lib/docker/containers/845a2978a2b56775d174983f7156b47f75cbd517d111b717fa683318c531e629/hosts",
	        "LogPath": "/var/lib/docker/containers/845a2978a2b56775d174983f7156b47f75cbd517d111b717fa683318c531e629/845a2978a2b56775d174983f7156b47f75cbd517d111b717fa683318c531e629-json.log",
	        "Name": "/addons-701527",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-701527:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-701527",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/feeefb46f860108fd8c940e50fcada0bdf5c7b66d681bb2fa312b75ddd6ae3e8-init/diff:/var/lib/docker/overlay2/ab6cf1b3d2a8cc4179735a54668a5a4ec060988eb25398d5edaaa8c4eb9fdd94/diff",
	                "MergedDir": "/var/lib/docker/overlay2/feeefb46f860108fd8c940e50fcada0bdf5c7b66d681bb2fa312b75ddd6ae3e8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/feeefb46f860108fd8c940e50fcada0bdf5c7b66d681bb2fa312b75ddd6ae3e8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/feeefb46f860108fd8c940e50fcada0bdf5c7b66d681bb2fa312b75ddd6ae3e8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-701527",
	                "Source": "/var/lib/docker/volumes/addons-701527/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-701527",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-701527",
	                "name.minikube.sigs.k8s.io": "addons-701527",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6fdb3567838e25a0e40a29c05350222b1cd03ced5a0f9bbbc3dc4c2a2f27bdcf",
	            "SandboxKey": "/var/run/docker/netns/6fdb3567838e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-701527": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "f7d7d2fb753c6c47c2c2a4eaa0e0c3f27dba879f8e03828a5e109b66b1f60920",
	                    "EndpointID": "3313ac333154670091e4647b944cdb1464dd19e65f492f5325eb1e688e2b98b8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-701527",
	                        "845a2978a2b5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
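The post-mortem above embeds the full `docker inspect` dump, but triage typically only needs a few of its fields: the container state and the host port mapped to 22/tcp for SSH. A short Go sketch that pulls those out; `inspectEntry` is a hypothetical struct mirroring just the fields shown in the dump, and the container name is the one from this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry mirrors only the docker inspect fields used above:
// container state plus the host ports bound for each container port.
type inspectEntry struct {
	Name  string
	State struct {
		Status  string
		Running bool
		Pid     int
	}
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "addons-701527").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry // docker inspect always emits a JSON array
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Printf("%s: status=%s pid=%d\n", e.Name, e.State.Status, e.State.Pid)
		for _, b := range e.NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("  ssh mapped to %s:%s\n", b.HostIp, b.HostPort)
		}
	}
}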
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-701527 -n addons-701527
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-701527 logs -n 25: (1.121004471s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-926270                                                                   | download-docker-926270 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-759052   | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | binary-mirror-759052                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37197                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-759052                                                                     | binary-mirror-759052   | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| addons  | enable dashboard -p                                                                         | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | addons-701527                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | addons-701527                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-701527 --wait=true                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:46 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-701527 addons disable                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-701527 addons disable                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | -p addons-701527                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-701527 addons                                                                        | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-701527 addons disable                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-701527 ip                                                                            | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	| addons  | addons-701527 addons disable                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-701527 addons disable                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-701527 ssh cat                                                                       | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | /opt/local-path-provisioner/pvc-d348a07d-27a8-404f-adfc-4e8b72e76d0a_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-701527 addons disable                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:47 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-701527 addons disable                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:47 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-701527 addons                                                                        | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-701527 addons                                                                        | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-701527 addons                                                                        | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-701527 addons                                                                        | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-701527 ssh curl -s                                                                   | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-701527 ip                                                                            | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:49 UTC | 09 Dec 24 23:49 UTC |
	| addons  | addons-701527 addons disable                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:49 UTC | 09 Dec 24 23:49 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-701527 addons disable                                                                | addons-701527          | jenkins | v1.34.0 | 09 Dec 24 23:49 UTC | 09 Dec 24 23:49 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:43:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:43:42.291350   16706 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:43:42.291956   16706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:42.292008   16706 out.go:358] Setting ErrFile to fd 2...
	I1209 23:43:42.292026   16706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:42.292469   16706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-8617/.minikube/bin
	I1209 23:43:42.293591   16706 out.go:352] Setting JSON to false
	I1209 23:43:42.294440   16706 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1569,"bootTime":1733786253,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:43:42.294553   16706 start.go:139] virtualization: kvm guest
	I1209 23:43:42.296682   16706 out.go:177] * [addons-701527] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:43:42.298147   16706 notify.go:220] Checking for updates...
	I1209 23:43:42.298178   16706 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 23:43:42.299801   16706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:43:42.301367   16706 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-8617/kubeconfig
	I1209 23:43:42.302924   16706 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-8617/.minikube
	I1209 23:43:42.304469   16706 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:43:42.305871   16706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:43:42.307386   16706 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:43:42.331603   16706 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:43:42.331700   16706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:43:42.377573   16706 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-09 23:43:42.368637796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:43:42.377679   16706 docker.go:318] overlay module found
	I1209 23:43:42.379787   16706 out.go:177] * Using the docker driver based on user configuration
	I1209 23:43:42.381419   16706 start.go:297] selected driver: docker
	I1209 23:43:42.381437   16706 start.go:901] validating driver "docker" against <nil>
	I1209 23:43:42.381450   16706 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:43:42.382222   16706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:43:42.429393   16706 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-09 23:43:42.42124561 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:43:42.429563   16706 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:43:42.429799   16706 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:43:42.431796   16706 out.go:177] * Using Docker driver with root privileges
	I1209 23:43:42.433203   16706 cni.go:84] Creating CNI manager for ""
	I1209 23:43:42.433274   16706 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 23:43:42.433300   16706 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 23:43:42.433381   16706 start.go:340] cluster config:
	{Name:addons-701527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-701527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:43:42.434804   16706 out.go:177] * Starting "addons-701527" primary control-plane node in "addons-701527" cluster
	I1209 23:43:42.436125   16706 cache.go:121] Beginning downloading kic base image for docker with crio
	I1209 23:43:42.437550   16706 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1209 23:43:42.438812   16706 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:43:42.438837   16706 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 23:43:42.438849   16706 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-8617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 23:43:42.438856   16706 cache.go:56] Caching tarball of preloaded images
	I1209 23:43:42.438923   16706 preload.go:172] Found /home/jenkins/minikube-integration/20062-8617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 23:43:42.438935   16706 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 23:43:42.439279   16706 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/config.json ...
	I1209 23:43:42.439309   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/config.json: {Name:mkddb15bcf662292992308fcda9e5afee384d781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:43:42.454544   16706 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 23:43:42.454662   16706 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1209 23:43:42.454679   16706 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1209 23:43:42.454683   16706 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1209 23:43:42.454690   16706 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1209 23:43:42.454698   16706 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from local cache
	I1209 23:43:54.336857   16706 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from cached tarball
	I1209 23:43:54.336900   16706 cache.go:194] Successfully downloaded all kic artifacts
	I1209 23:43:54.336954   16706 start.go:360] acquireMachinesLock for addons-701527: {Name:mk1a37956add636236f0f7623a5fab0561619f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:43:54.337068   16706 start.go:364] duration metric: took 88.265µs to acquireMachinesLock for "addons-701527"
	I1209 23:43:54.337096   16706 start.go:93] Provisioning new machine with config: &{Name:addons-701527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-701527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:43:54.337198   16706 start.go:125] createHost starting for "" (driver="docker")
	I1209 23:43:54.339145   16706 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1209 23:43:54.339398   16706 start.go:159] libmachine.API.Create for "addons-701527" (driver="docker")
	I1209 23:43:54.339428   16706 client.go:168] LocalClient.Create starting
	I1209 23:43:54.339520   16706 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca.pem
	I1209 23:43:54.467011   16706 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/cert.pem
	I1209 23:43:54.667107   16706 cli_runner.go:164] Run: docker network inspect addons-701527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1209 23:43:54.683385   16706 cli_runner.go:211] docker network inspect addons-701527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1209 23:43:54.683462   16706 network_create.go:284] running [docker network inspect addons-701527] to gather additional debugging logs...
	I1209 23:43:54.683484   16706 cli_runner.go:164] Run: docker network inspect addons-701527
	W1209 23:43:54.699365   16706 cli_runner.go:211] docker network inspect addons-701527 returned with exit code 1
	I1209 23:43:54.699392   16706 network_create.go:287] error running [docker network inspect addons-701527]: docker network inspect addons-701527: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-701527 not found
	I1209 23:43:54.699403   16706 network_create.go:289] output of [docker network inspect addons-701527]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-701527 not found
	
	** /stderr **
	I1209 23:43:54.699516   16706 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 23:43:54.715467   16706 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c255f0}
	I1209 23:43:54.715522   16706 network_create.go:124] attempt to create docker network addons-701527 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1209 23:43:54.715581   16706 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-701527 addons-701527
	I1209 23:43:54.775951   16706 network_create.go:108] docker network addons-701527 192.168.49.0/24 created
	I1209 23:43:54.775984   16706 kic.go:121] calculated static IP "192.168.49.2" for the "addons-701527" container
	I1209 23:43:54.776060   16706 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1209 23:43:54.791352   16706 cli_runner.go:164] Run: docker volume create addons-701527 --label name.minikube.sigs.k8s.io=addons-701527 --label created_by.minikube.sigs.k8s.io=true
	I1209 23:43:54.807968   16706 oci.go:103] Successfully created a docker volume addons-701527
	I1209 23:43:54.808051   16706 cli_runner.go:164] Run: docker run --rm --name addons-701527-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-701527 --entrypoint /usr/bin/test -v addons-701527:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib
	I1209 23:44:01.918760   16706 cli_runner.go:217] Completed: docker run --rm --name addons-701527-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-701527 --entrypoint /usr/bin/test -v addons-701527:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib: (7.110650813s)
	I1209 23:44:01.918791   16706 oci.go:107] Successfully prepared a docker volume addons-701527
	I1209 23:44:01.918821   16706 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:44:01.918850   16706 kic.go:194] Starting extracting preloaded images to volume ...
	I1209 23:44:01.918933   16706 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20062-8617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-701527:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir
	I1209 23:44:06.385319   16706 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20062-8617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-701527:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.466339119s)
	I1209 23:44:06.385349   16706 kic.go:203] duration metric: took 4.46649867s to extract preloaded images to volume ...
	W1209 23:44:06.385464   16706 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1209 23:44:06.385549   16706 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1209 23:44:06.431904   16706 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-701527 --name addons-701527 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-701527 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-701527 --network addons-701527 --ip 192.168.49.2 --volume addons-701527:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615
	I1209 23:44:06.756203   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Running}}
	I1209 23:44:06.774385   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:06.792768   16706 cli_runner.go:164] Run: docker exec addons-701527 stat /var/lib/dpkg/alternatives/iptables
	I1209 23:44:06.833331   16706 oci.go:144] the created container "addons-701527" has a running status.
	I1209 23:44:06.833363   16706 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa...
	I1209 23:44:07.065248   16706 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1209 23:44:07.089698   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:07.117248   16706 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1209 23:44:07.117277   16706 kic_runner.go:114] Args: [docker exec --privileged addons-701527 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1209 23:44:07.195128   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:07.220606   16706 machine.go:93] provisionDockerMachine start ...
	I1209 23:44:07.220697   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:07.241994   16706 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:07.242270   16706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1209 23:44:07.242288   16706 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 23:44:07.398775   16706 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-701527
	
	I1209 23:44:07.398799   16706 ubuntu.go:169] provisioning hostname "addons-701527"
	I1209 23:44:07.398843   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:07.416433   16706 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:07.416638   16706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1209 23:44:07.416662   16706 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-701527 && echo "addons-701527" | sudo tee /etc/hostname
	I1209 23:44:07.553918   16706 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-701527
	
	I1209 23:44:07.554010   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:07.570357   16706 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:07.570601   16706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1209 23:44:07.570623   16706 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-701527' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-701527/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-701527' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:44:07.695404   16706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:44:07.695437   16706 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20062-8617/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-8617/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-8617/.minikube}
	I1209 23:44:07.695463   16706 ubuntu.go:177] setting up certificates
	I1209 23:44:07.695472   16706 provision.go:84] configureAuth start
	I1209 23:44:07.695536   16706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-701527
	I1209 23:44:07.712526   16706 provision.go:143] copyHostCerts
	I1209 23:44:07.712591   16706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-8617/.minikube/ca.pem (1078 bytes)
	I1209 23:44:07.712697   16706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-8617/.minikube/cert.pem (1123 bytes)
	I1209 23:44:07.712756   16706 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-8617/.minikube/key.pem (1675 bytes)
	I1209 23:44:07.712812   16706 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-8617/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca-key.pem org=jenkins.addons-701527 san=[127.0.0.1 192.168.49.2 addons-701527 localhost minikube]
	I1209 23:44:07.802490   16706 provision.go:177] copyRemoteCerts
	I1209 23:44:07.802546   16706 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:44:07.802585   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:07.819267   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:07.911951   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 23:44:07.934493   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 23:44:07.956309   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 23:44:07.977414   16706 provision.go:87] duration metric: took 281.931064ms to configureAuth
	I1209 23:44:07.977443   16706 ubuntu.go:193] setting minikube options for container-runtime
	I1209 23:44:07.977599   16706 config.go:182] Loaded profile config "addons-701527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:44:07.977692   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:07.994768   16706 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:07.994958   16706 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1209 23:44:07.994983   16706 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:44:08.206091   16706 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:44:08.206126   16706 machine.go:96] duration metric: took 985.497185ms to provisionDockerMachine
	I1209 23:44:08.206142   16706 client.go:171] duration metric: took 13.866707081s to LocalClient.Create
	I1209 23:44:08.206160   16706 start.go:167] duration metric: took 13.866761679s to libmachine.API.Create "addons-701527"
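
Each ssh_runner "Run:" entry in this log executes over the SSH connection that sshutil opens to 127.0.0.1:32768 with the machine's id_rsa key. A hedged sketch of that runner pattern using golang.org/x/crypto/ssh (host, port, user, and key path taken from the log; the command run is arbitrary):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User: "docker",
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Acceptable for a throwaway local test container, never for production.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("cat /etc/os-release")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
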
	I1209 23:44:08.206171   16706 start.go:293] postStartSetup for "addons-701527" (driver="docker")
	I1209 23:44:08.206191   16706 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:44:08.206267   16706 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:44:08.206320   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:08.223150   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:08.316251   16706 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:44:08.319094   16706 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1209 23:44:08.319161   16706 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1209 23:44:08.319184   16706 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1209 23:44:08.319196   16706 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1209 23:44:08.319213   16706 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-8617/.minikube/addons for local assets ...
	I1209 23:44:08.319282   16706 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-8617/.minikube/files for local assets ...
	I1209 23:44:08.319315   16706 start.go:296] duration metric: took 113.130951ms for postStartSetup
	I1209 23:44:08.319642   16706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-701527
	I1209 23:44:08.336055   16706 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/config.json ...
	I1209 23:44:08.336288   16706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1209 23:44:08.336329   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:08.353745   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:08.440184   16706 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1209 23:44:08.444201   16706 start.go:128] duration metric: took 14.106986411s to createHost
	I1209 23:44:08.444225   16706 start.go:83] releasing machines lock for "addons-701527", held for 14.10714519s
	I1209 23:44:08.444293   16706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-701527
	I1209 23:44:08.461280   16706 ssh_runner.go:195] Run: cat /version.json
	I1209 23:44:08.461334   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:08.461384   16706 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:44:08.461439   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:08.478183   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:08.478399   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:08.642670   16706 ssh_runner.go:195] Run: systemctl --version
	I1209 23:44:08.646692   16706 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:44:08.783694   16706 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1209 23:44:08.787908   16706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:44:08.805730   16706 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1209 23:44:08.805805   16706 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:44:08.831361   16706 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
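
Note that the conflicting loopback/bridge/podman CNI configs are disabled by renaming, not deleting: the .mk_disabled suffix keeps them recoverable. The equivalent logic as a short Go sketch (glob patterns lifted from the find commands above; renaming under /etc/cni/net.d needs root on a real host):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pattern := range []string{"*loopback.conf*", "*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pattern))
            if err != nil {
                panic(err)
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    panic(err)
                }
                fmt.Println("disabled", m)
            }
        }
    }
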
	I1209 23:44:08.831387   16706 start.go:495] detecting cgroup driver to use...
	I1209 23:44:08.831421   16706 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1209 23:44:08.831457   16706 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:44:08.844432   16706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:44:08.855060   16706 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:44:08.855122   16706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:44:08.867546   16706 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:44:08.880740   16706 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:44:08.956437   16706 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:44:09.036536   16706 docker.go:233] disabling docker service ...
	I1209 23:44:09.036590   16706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:44:09.053701   16706 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:44:09.065051   16706 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:44:09.140516   16706 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:44:09.216805   16706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:44:09.226910   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:44:09.240762   16706 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:44:09.240813   16706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:09.249462   16706 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:44:09.249528   16706 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:09.258292   16706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:09.267398   16706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:09.276327   16706 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:44:09.284771   16706 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:09.293582   16706 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:09.307349   16706 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
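
The sed chain above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10, sets cgroup_manager to cgroupfs, forces conmon_cgroup to "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls so low ports can be bound without extra privileges. A Go equivalent of the first two substitutions, with regexes mirroring the sed expressions:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        // Same line-anchored substitutions as the sed commands in the log.
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            panic(err)
        }
    }
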
	I1209 23:44:09.315961   16706 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:44:09.323071   16706 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:44:09.323117   16706 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:44:09.335342   16706 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
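
The sysctl failure above is expected: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is why the very next step is modprobe. The check-then-enable sequence, sketched in Go (both the modprobe and the /proc write require root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(key); err != nil {
            // A missing file means the module is not loaded yet, not a fatal error.
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                panic(fmt.Sprintf("modprobe: %v: %s", err, out))
            }
        }
        // Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            panic(err)
        }
    }
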
	I1209 23:44:09.342933   16706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:44:09.415449   16706 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:44:09.524846   16706 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:44:09.524919   16706 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:44:09.528108   16706 start.go:563] Will wait 60s for crictl version
	I1209 23:44:09.528157   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:44:09.531039   16706 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:44:09.561465   16706 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1209 23:44:09.561562   16706 ssh_runner.go:195] Run: crio --version
	I1209 23:44:09.594802   16706 ssh_runner.go:195] Run: crio --version
	I1209 23:44:09.627234   16706 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1209 23:44:09.628420   16706 cli_runner.go:164] Run: docker network inspect addons-701527 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1209 23:44:09.644446   16706 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1209 23:44:09.648107   16706 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:44:09.658605   16706 kubeadm.go:883] updating cluster {Name:addons-701527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-701527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:44:09.658726   16706 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:44:09.658768   16706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:44:09.723815   16706 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:44:09.723839   16706 crio.go:433] Images already preloaded, skipping extraction
	I1209 23:44:09.723878   16706 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:44:09.754490   16706 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:44:09.754512   16706 cache_images.go:84] Images are preloaded, skipping loading
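
The preload check runs crictl images --output json and compares the result against the expected image list for v1.31.2. A small sketch of parsing that output; the JSON field names shown are the ones crictl is known to emit, but treat the schema here as illustrative:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Abridged view of the `crictl images --output json` payload.
    type imageList struct {
        Images []struct {
            ID       string   `json:"id"`
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        for _, img := range list.Images {
            fmt.Println(img.RepoTags)
        }
    }
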
	I1209 23:44:09.754519   16706 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1209 23:44:09.754599   16706 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-701527 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-701527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:44:09.754661   16706 ssh_runner.go:195] Run: crio config
	I1209 23:44:09.795070   16706 cni.go:84] Creating CNI manager for ""
	I1209 23:44:09.795095   16706 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 23:44:09.795104   16706 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:44:09.795125   16706 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-701527 NodeName:addons-701527 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:44:09.795254   16706 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-701527"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
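
minikube renders the kubeadm.yaml above from Go templates driven by the kubeadm options struct logged earlier. A toy text/template sketch of that rendering step; the template fragment and parameter struct here are illustrative stand-ins, not minikube's real ones:

    package main

    import (
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        // Values copied from the logged options; the struct shape is hypothetical.
        params := struct {
            AdvertiseAddress string
            APIServerPort    int
            CRISocket        string
            NodeName         string
        }{"192.168.49.2", 8443, "unix:///var/run/crio/crio.sock", "addons-701527"}
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        if err := t.Execute(os.Stdout, params); err != nil {
            panic(err)
        }
    }
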
	
	I1209 23:44:09.795311   16706 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:44:09.803323   16706 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:44:09.803383   16706 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:44:09.811006   16706 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1209 23:44:09.826423   16706 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:44:09.841975   16706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1209 23:44:09.857568   16706 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1209 23:44:09.860622   16706 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
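
The grep -v / echo / cp pipeline above is an idempotent hosts-file update: strip any stale line for the name, append the fresh mapping, then copy the result into place. The same logic in Go, printing the new contents instead of writing /etc/hosts (which needs root):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any previous mapping, mirroring grep -v $'\t...$'.
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, "192.168.49.2\tcontrol-plane.minikube.internal")
        fmt.Println(strings.Join(kept, "\n"))
    }
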
	I1209 23:44:09.870028   16706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:44:09.939730   16706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:44:09.951905   16706 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527 for IP: 192.168.49.2
	I1209 23:44:09.951929   16706 certs.go:194] generating shared ca certs ...
	I1209 23:44:09.951949   16706 certs.go:226] acquiring lock for ca certs: {Name:mk82a507a4733e86b5bb8ab9261ee4fbeee6dad9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.952077   16706 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-8617/.minikube/ca.key
	I1209 23:44:10.098168   16706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-8617/.minikube/ca.crt ...
	I1209 23:44:10.098198   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/ca.crt: {Name:mke990eb271b135ccbc977c996229a252283baa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.098361   16706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-8617/.minikube/ca.key ...
	I1209 23:44:10.098372   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/ca.key: {Name:mk2b7bd59246893c57dc576601c7811abd9e7298 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.098444   16706 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-8617/.minikube/proxy-client-ca.key
	I1209 23:44:10.299597   16706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-8617/.minikube/proxy-client-ca.crt ...
	I1209 23:44:10.299629   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/proxy-client-ca.crt: {Name:mk840e279a932868b17a95fa509aae91d8562222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.299821   16706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-8617/.minikube/proxy-client-ca.key ...
	I1209 23:44:10.299835   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/proxy-client-ca.key: {Name:mkc1c6b221c0aba3ef91583fc143126cb64e403f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.299913   16706 certs.go:256] generating profile certs ...
	I1209 23:44:10.299966   16706 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.key
	I1209 23:44:10.299980   16706 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt with IP's: []
	I1209 23:44:10.373492   16706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt ...
	I1209 23:44:10.373524   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: {Name:mkbf84e05e635237af5578f3a666a71ae1df54ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.373692   16706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.key ...
	I1209 23:44:10.373704   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.key: {Name:mk449bd5cb74f087931c45dc3ca19e3248feb0e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.373771   16706 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.key.c3458f7b
	I1209 23:44:10.373789   16706 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.crt.c3458f7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1209 23:44:10.542382   16706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.crt.c3458f7b ...
	I1209 23:44:10.542412   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.crt.c3458f7b: {Name:mk626b1fdb96268d49d7851ab3383da13099eb4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.542576   16706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.key.c3458f7b ...
	I1209 23:44:10.542589   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.key.c3458f7b: {Name:mkdd58f89195a45e477c101614f0b69c1c04a23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.542661   16706 certs.go:381] copying /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.crt.c3458f7b -> /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.crt
	I1209 23:44:10.542731   16706 certs.go:385] copying /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.key.c3458f7b -> /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.key
	I1209 23:44:10.542776   16706 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/proxy-client.key
	I1209 23:44:10.542792   16706 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/proxy-client.crt with IP's: []
	I1209 23:44:10.687571   16706 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/proxy-client.crt ...
	I1209 23:44:10.687597   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/proxy-client.crt: {Name:mkddbc6b6debbd6d4d4a91713847e0fa81cfb165 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.687735   16706 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/proxy-client.key ...
	I1209 23:44:10.687748   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/proxy-client.key: {Name:mkf5097941462bfd427f62a91226f3f67a6de3d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:10.687898   16706 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 23:44:10.687929   16706 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/ca.pem (1078 bytes)
	I1209 23:44:10.687954   16706 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:44:10.687978   16706 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-8617/.minikube/certs/key.pem (1675 bytes)
	I1209 23:44:10.688663   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:44:10.712437   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 23:44:10.733714   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:44:10.754989   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 23:44:10.776534   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 23:44:10.796685   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 23:44:10.817486   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:44:10.837489   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 23:44:10.857830   16706 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-8617/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:44:10.878560   16706 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:44:10.893645   16706 ssh_runner.go:195] Run: openssl version
	I1209 23:44:10.898576   16706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:44:10.907087   16706 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:44:10.910619   16706 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:44:10.910665   16706 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:44:10.916997   16706 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 23:44:10.925412   16706 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:44:10.928356   16706 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 23:44:10.928400   16706 kubeadm.go:392] StartCluster: {Name:addons-701527 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-701527 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:44:10.928483   16706 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:44:10.928537   16706 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:44:10.960284   16706 cri.go:89] found id: ""
	I1209 23:44:10.960343   16706 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:44:10.968161   16706 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:44:10.976336   16706 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1209 23:44:10.976399   16706 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:44:10.983822   16706 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:44:10.983839   16706 kubeadm.go:157] found existing configuration files:
	
	I1209 23:44:10.983884   16706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:44:10.991209   16706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:44:10.991283   16706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:44:10.998501   16706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:44:11.005865   16706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:44:11.005914   16706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:44:11.013124   16706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:44:11.020638   16706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:44:11.020685   16706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:44:11.027697   16706 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:44:11.034905   16706 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:44:11.034955   16706 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:44:11.042069   16706 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1209 23:44:11.076185   16706 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 23:44:11.076255   16706 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 23:44:11.091672   16706 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1209 23:44:11.091735   16706 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-gcp
	I1209 23:44:11.091763   16706 kubeadm.go:310] OS: Linux
	I1209 23:44:11.091810   16706 kubeadm.go:310] CGROUPS_CPU: enabled
	I1209 23:44:11.091882   16706 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1209 23:44:11.091935   16706 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1209 23:44:11.091993   16706 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1209 23:44:11.092038   16706 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1209 23:44:11.092113   16706 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1209 23:44:11.092157   16706 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1209 23:44:11.092199   16706 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1209 23:44:11.092253   16706 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1209 23:44:11.138467   16706 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 23:44:11.138611   16706 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 23:44:11.138760   16706 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 23:44:11.145223   16706 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 23:44:11.147980   16706 out.go:235]   - Generating certificates and keys ...
	I1209 23:44:11.148085   16706 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 23:44:11.148151   16706 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 23:44:11.358536   16706 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 23:44:11.443796   16706 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 23:44:11.690035   16706 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 23:44:11.953136   16706 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 23:44:12.261494   16706 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 23:44:12.261627   16706 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-701527 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 23:44:12.434374   16706 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 23:44:12.434519   16706 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-701527 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1209 23:44:12.543438   16706 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 23:44:12.817631   16706 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 23:44:12.942571   16706 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 23:44:12.942682   16706 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 23:44:13.043413   16706 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 23:44:13.416601   16706 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 23:44:13.469795   16706 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 23:44:13.647070   16706 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 23:44:13.852980   16706 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 23:44:13.853506   16706 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 23:44:13.856997   16706 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 23:44:13.859361   16706 out.go:235]   - Booting up control plane ...
	I1209 23:44:13.859460   16706 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 23:44:13.859575   16706 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 23:44:13.860155   16706 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 23:44:13.868977   16706 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 23:44:13.874402   16706 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 23:44:13.874466   16706 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 23:44:13.953671   16706 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 23:44:13.953832   16706 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 23:44:14.955083   16706 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001498803s
	I1209 23:44:14.955178   16706 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 23:44:18.956956   16706 kubeadm.go:310] [api-check] The API server is healthy after 4.00192303s
	I1209 23:44:18.968081   16706 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 23:44:18.978294   16706 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 23:44:18.996373   16706 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 23:44:18.996648   16706 kubeadm.go:310] [mark-control-plane] Marking the node addons-701527 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 23:44:19.004440   16706 kubeadm.go:310] [bootstrap-token] Using token: o9d4gk.ol9z315ujhqpyjtd
	I1209 23:44:19.006152   16706 out.go:235]   - Configuring RBAC rules ...
	I1209 23:44:19.006278   16706 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 23:44:19.011401   16706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 23:44:19.016658   16706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 23:44:19.018951   16706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 23:44:19.021357   16706 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 23:44:19.023596   16706 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 23:44:19.364054   16706 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 23:44:19.779522   16706 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 23:44:20.362886   16706 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 23:44:20.363691   16706 kubeadm.go:310] 
	I1209 23:44:20.363780   16706 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 23:44:20.363790   16706 kubeadm.go:310] 
	I1209 23:44:20.363886   16706 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 23:44:20.363897   16706 kubeadm.go:310] 
	I1209 23:44:20.363973   16706 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 23:44:20.364112   16706 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 23:44:20.364183   16706 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 23:44:20.364193   16706 kubeadm.go:310] 
	I1209 23:44:20.364267   16706 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 23:44:20.364276   16706 kubeadm.go:310] 
	I1209 23:44:20.364339   16706 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 23:44:20.364348   16706 kubeadm.go:310] 
	I1209 23:44:20.364444   16706 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 23:44:20.364574   16706 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 23:44:20.364672   16706 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 23:44:20.364685   16706 kubeadm.go:310] 
	I1209 23:44:20.364797   16706 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 23:44:20.364908   16706 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 23:44:20.364922   16706 kubeadm.go:310] 
	I1209 23:44:20.365053   16706 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o9d4gk.ol9z315ujhqpyjtd \
	I1209 23:44:20.365183   16706 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d276d577512ade74a5109f58b5778ce04abe39c8c67256076dac49c0e0be586a \
	I1209 23:44:20.365203   16706 kubeadm.go:310] 	--control-plane 
	I1209 23:44:20.365209   16706 kubeadm.go:310] 
	I1209 23:44:20.365279   16706 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 23:44:20.365288   16706 kubeadm.go:310] 
	I1209 23:44:20.365354   16706 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o9d4gk.ol9z315ujhqpyjtd \
	I1209 23:44:20.365444   16706 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d276d577512ade74a5109f58b5778ce04abe39c8c67256076dac49c0e0be586a 
	I1209 23:44:20.366742   16706 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-gcp\n", err: exit status 1
	I1209 23:44:20.366841   16706 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 23:44:20.366865   16706 cni.go:84] Creating CNI manager for ""
	I1209 23:44:20.366871   16706 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 23:44:20.368931   16706 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1209 23:44:20.370244   16706 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1209 23:44:20.373878   16706 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1209 23:44:20.373902   16706 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1209 23:44:20.390148   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1209 23:44:20.580225   16706 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 23:44:20.580385   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:20.580408   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-701527 minikube.k8s.io/updated_at=2024_12_09T23_44_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=addons-701527 minikube.k8s.io/primary=true
	I1209 23:44:20.587158   16706 ops.go:34] apiserver oom_adj: -16
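
An oom_adj of -16 means the kernel's OOM killer will strongly prefer sacrificing other processes before kube-apiserver. A sketch of the probe minikube just ran (pgrep plus a /proc read; the -n flag, which picks the newest match, is an assumption here, the logged command used plain pgrep):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }
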
	I1209 23:44:20.647776   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:21.148758   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:21.647990   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:22.148710   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:22.648746   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:23.148720   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:23.647984   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:24.148143   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:24.648070   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:25.148745   16706 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:25.247786   16706 kubeadm.go:1113] duration metric: took 4.667456543s to wait for elevateKubeSystemPrivileges
	I1209 23:44:25.247829   16706 kubeadm.go:394] duration metric: took 14.319432421s to StartCluster
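
The ten back-to-back "kubectl get sa default" runs above are a fixed-interval poll: retry roughly every half second until the default service account exists, then record the total wait (4.67s here). The generic pattern, sketched (the command line is simplified from the logged one):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // pollUntil retries fn every interval until it succeeds or the deadline passes.
    func pollUntil(interval, timeout time.Duration, fn func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := fn(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        start := time.Now()
        err := pollUntil(500*time.Millisecond, 2*time.Minute, func() error {
            return exec.Command("kubectl", "get", "sa", "default").Run()
        })
        fmt.Println(time.Since(start), err)
    }
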
	I1209 23:44:25.247853   16706 settings.go:142] acquiring lock: {Name:mk3fbdb3180100a5b99ca4ec9ec726523f75f361 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:25.247979   16706 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-8617/kubeconfig
	I1209 23:44:25.248459   16706 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/kubeconfig: {Name:mk0b7b47a4c3647122bd54439d50dda394f7edf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:25.248680   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 23:44:25.248708   16706 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:44:25.248772   16706 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
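
The toEnable map above is the merged addon selection (defaults plus flags); every key marked true is then turned into its own "Setting addon" sequence below. Filtering and ordering such a map is a few lines of Go (the sort matters because Go map iteration order is randomized):

    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        toEnable := map[string]bool{
            "ingress": true, "metrics-server": true, "dashboard": false,
        }
        var enabled []string
        for name, on := range toEnable {
            if on {
                enabled = append(enabled, name)
            }
        }
        sort.Strings(enabled)
        fmt.Println(enabled)
    }
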
	I1209 23:44:25.248917   16706 addons.go:69] Setting yakd=true in profile "addons-701527"
	I1209 23:44:25.248925   16706 addons.go:69] Setting ingress-dns=true in profile "addons-701527"
	I1209 23:44:25.248940   16706 addons.go:234] Setting addon yakd=true in "addons-701527"
	I1209 23:44:25.248944   16706 addons.go:234] Setting addon ingress-dns=true in "addons-701527"
	I1209 23:44:25.248943   16706 config.go:182] Loaded profile config "addons-701527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:44:25.248942   16706 addons.go:69] Setting registry=true in profile "addons-701527"
	I1209 23:44:25.248962   16706 addons.go:234] Setting addon registry=true in "addons-701527"
	I1209 23:44:25.248972   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.248982   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.248967   16706 addons.go:69] Setting metrics-server=true in profile "addons-701527"
	I1209 23:44:25.249023   16706 addons.go:234] Setting addon metrics-server=true in "addons-701527"
	I1209 23:44:25.249036   16706 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-701527"
	I1209 23:44:25.249052   16706 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-701527"
	I1209 23:44:25.249066   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.249074   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.249079   16706 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-701527"
	I1209 23:44:25.249101   16706 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-701527"
	I1209 23:44:25.249406   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.249530   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.249539   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.249550   16706 addons.go:69] Setting volcano=true in profile "addons-701527"
	I1209 23:44:25.249564   16706 addons.go:234] Setting addon volcano=true in "addons-701527"
	I1209 23:44:25.249568   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.249566   16706 addons.go:69] Setting volumesnapshots=true in profile "addons-701527"
	I1209 23:44:25.249583   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.249595   16706 addons.go:234] Setting addon volumesnapshots=true in "addons-701527"
	I1209 23:44:25.249623   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.249629   16706 addons.go:69] Setting storage-provisioner=true in profile "addons-701527"
	I1209 23:44:25.249647   16706 addons.go:234] Setting addon storage-provisioner=true in "addons-701527"
	I1209 23:44:25.249706   16706 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-701527"
	I1209 23:44:25.249748   16706 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-701527"
	I1209 23:44:25.249781   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.250037   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.250063   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.250267   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.250429   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.251229   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.249539   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.251685   16706 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-701527"
	I1209 23:44:25.251704   16706 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-701527"
	I1209 23:44:25.251733   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.252195   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.253278   16706 addons.go:69] Setting inspektor-gadget=true in profile "addons-701527"
	I1209 23:44:25.253341   16706 addons.go:234] Setting addon inspektor-gadget=true in "addons-701527"
	I1209 23:44:25.253388   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.248996   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.255656   16706 addons.go:69] Setting cloud-spanner=true in profile "addons-701527"
	I1209 23:44:25.255723   16706 addons.go:234] Setting addon cloud-spanner=true in "addons-701527"
	I1209 23:44:25.255770   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.256121   16706 addons.go:69] Setting default-storageclass=true in profile "addons-701527"
	I1209 23:44:25.256155   16706 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-701527"
	I1209 23:44:25.256491   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.256733   16706 out.go:177] * Verifying Kubernetes components...
	I1209 23:44:25.256932   16706 addons.go:69] Setting ingress=true in profile "addons-701527"
	I1209 23:44:25.256961   16706 addons.go:234] Setting addon ingress=true in "addons-701527"
	I1209 23:44:25.257005   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.257017   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.257237   16706 addons.go:69] Setting gcp-auth=true in profile "addons-701527"
	I1209 23:44:25.257262   16706 mustload.go:65] Loading cluster: addons-701527
	I1209 23:44:25.259275   16706 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:44:25.279845   16706 config.go:182] Loaded profile config "addons-701527": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:44:25.279984   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.280173   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.280735   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.281123   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
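The addon setup steps above run concurrently, which is why the "Setting addon" / "Checking if ... exists" / "docker container inspect" lines interleave. Each addon first confirms the node container is up; the same check can be run by hand (assuming the profile's container is still named addons-701527, as in this run):

        docker container inspect addons-701527 --format '{{.State.Status}}'

which prints "running" while the node is healthy.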
	I1209 23:44:25.300350   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1209 23:44:25.300349   16706 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1209 23:44:25.301935   16706 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1209 23:44:25.301966   16706 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1209 23:44:25.303909   16706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1209 23:44:25.303930   16706 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1209 23:44:25.303992   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.304204   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	W1209 23:44:25.308278   16706 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
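The volcano warning is expected on this runner: the addon is skipped because it does not support the crio container runtime, and the remaining addons proceed. To confirm which addons actually ended up enabled, the standard minikube CLI can be used:

        minikube -p addons-701527 addons list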
	I1209 23:44:25.317261   16706 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-701527"
	I1209 23:44:25.317326   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.317758   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.341552   16706 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1209 23:44:25.341724   16706 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:44:25.343412   16706 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 23:44:25.343443   16706 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 23:44:25.343519   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.343910   16706 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:44:25.343927   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 23:44:25.343976   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.351848   16706 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1209 23:44:25.352010   16706 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1209 23:44:25.353108   16706 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1209 23:44:25.353129   16706 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1209 23:44:25.353195   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.353592   16706 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 23:44:25.353609   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1209 23:44:25.353660   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.354164   16706 addons.go:234] Setting addon default-storageclass=true in "addons-701527"
	I1209 23:44:25.354208   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.354648   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:25.356133   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
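The repeated docker container inspect calls with the HostPort template resolve the host port Docker mapped to the node's SSH port 22 (here 32768, as the ssh client lines show). Assuming the same container name, an equivalent one-liner is:

        docker port addons-701527 22/tcp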
	I1209 23:44:25.358165   16706 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1209 23:44:25.358234   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1209 23:44:25.358285   16706 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:44:25.360658   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.360793   16706 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 23:44:25.360810   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1209 23:44:25.360863   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.361860   16706 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1209 23:44:25.363199   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1209 23:44:25.363352   16706 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1209 23:44:25.363368   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1209 23:44:25.363416   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.365955   16706 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1209 23:44:25.366018   16706 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1209 23:44:25.367132   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1209 23:44:25.368461   16706 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 23:44:25.368480   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1209 23:44:25.368533   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.368720   16706 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:44:25.370280   16706 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 23:44:25.370298   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1209 23:44:25.370345   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.370557   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1209 23:44:25.372722   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1209 23:44:25.374031   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1209 23:44:25.375330   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1209 23:44:25.376695   16706 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1209 23:44:25.377993   16706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1209 23:44:25.378013   16706 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1209 23:44:25.378038   16706 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1209 23:44:25.378078   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.381729   16706 out.go:177]   - Using image docker.io/registry:2.8.3
	I1209 23:44:25.384600   16706 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1209 23:44:25.384619   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1209 23:44:25.384682   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.387191   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:25.389354   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.399963   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.409817   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.419799   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.423352   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.424435   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.424818   16706 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1209 23:44:25.427566   16706 out.go:177]   - Using image docker.io/busybox:stable
	I1209 23:44:25.427963   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.429485   16706 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 23:44:25.429504   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1209 23:44:25.429556   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.431663   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.437563   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.445936   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.446296   16706 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 23:44:25.446311   16706 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 23:44:25.446349   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:25.452620   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.463271   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:25.699889   16706 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:44:25.699978   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 23:44:25.704287   16706 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1209 23:44:25.704318   16706 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1209 23:44:25.710611   16706 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1209 23:44:25.710632   16706 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1209 23:44:25.797291   16706 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1209 23:44:25.797321   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1209 23:44:25.808292   16706 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1209 23:44:25.808344   16706 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1209 23:44:25.890307   16706 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1209 23:44:25.890352   16706 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1209 23:44:25.892144   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 23:44:25.899942   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 23:44:25.988335   16706 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1209 23:44:25.988430   16706 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1209 23:44:25.989261   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1209 23:44:25.999726   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 23:44:26.002492   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1209 23:44:26.092005   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 23:44:26.095886   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 23:44:26.101953   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:44:26.103720   16706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1209 23:44:26.103747   16706 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1209 23:44:26.189579   16706 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1209 23:44:26.189661   16706 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1209 23:44:26.193203   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 23:44:26.206605   16706 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1209 23:44:26.206632   16706 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1209 23:44:26.291032   16706 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1209 23:44:26.291123   16706 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1209 23:44:26.393546   16706 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 23:44:26.393627   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1209 23:44:26.492758   16706 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1209 23:44:26.492785   16706 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1209 23:44:26.585623   16706 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1209 23:44:26.585707   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1209 23:44:26.595586   16706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1209 23:44:26.595616   16706 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1209 23:44:26.803673   16706 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 23:44:26.803769   16706 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 23:44:26.886204   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1209 23:44:26.892853   16706 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1209 23:44:26.892942   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1209 23:44:27.085506   16706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1209 23:44:27.085589   16706 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1209 23:44:27.095484   16706 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:44:27.095656   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1209 23:44:27.191048   16706 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:44:27.191078   16706 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 23:44:27.286245   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1209 23:44:27.386109   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:44:27.486739   16706 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.786717562s)
	I1209 23:44:27.486781   16706 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
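For reference, the sed pipeline completed above edits the coredns ConfigMap in place: it adds a "log" directive before "errors" and inserts the following block ahead of the "forward . /etc/resolv.conf" directive, so pods can resolve host.minikube.internal to 192.168.49.1:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }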
	I1209 23:44:27.488143   16706 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.788219446s)
	I1209 23:44:27.489010   16706 node_ready.go:35] waiting up to 6m0s for node "addons-701527" to be "Ready" ...
	I1209 23:44:27.489209   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.597037926s)
	I1209 23:44:27.489258   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.589227728s)
	I1209 23:44:27.492691   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:44:27.497378   16706 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1209 23:44:27.497409   16706 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1209 23:44:27.885194   16706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1209 23:44:27.885290   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1209 23:44:28.400451   16706 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-701527" context rescaled to 1 replicas
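The rescale above trims the two coredns replicas kubeadm creates by default down to one for this single-node cluster; done by hand, the equivalent command would be:

        kubectl --context addons-701527 -n kube-system scale deployment coredns --replicas=1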
	I1209 23:44:28.490965   16706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1209 23:44:28.491054   16706 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1209 23:44:28.599139   16706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1209 23:44:28.599218   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1209 23:44:29.085139   16706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1209 23:44:29.085226   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1209 23:44:29.302372   16706 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 23:44:29.302400   16706 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1209 23:44:29.493438   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 23:44:29.500410   16706 node_ready.go:53] node "addons-701527" has status "Ready":"False"
	I1209 23:44:30.088243   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.098894921s)
	I1209 23:44:30.088318   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.0885675s)
	I1209 23:44:30.088422   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.08589924s)
	I1209 23:44:30.088456   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.996427243s)
	I1209 23:44:31.600721   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.504791899s)
	I1209 23:44:31.600756   16706 addons.go:475] Verifying addon ingress=true in "addons-701527"
	I1209 23:44:31.600803   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.407521491s)
	I1209 23:44:31.600753   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.498729452s)
	I1209 23:44:31.600906   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.71462024s)
	I1209 23:44:31.600932   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.314659518s)
	I1209 23:44:31.601253   16706 addons.go:475] Verifying addon registry=true in "addons-701527"
	I1209 23:44:31.603344   16706 out.go:177] * Verifying ingress addon...
	I1209 23:44:31.603407   16706 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-701527 service yakd-dashboard -n yakd-dashboard
	
	I1209 23:44:31.603355   16706 out.go:177] * Verifying registry addon...
	I1209 23:44:31.606220   16706 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1209 23:44:31.606220   16706 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1209 23:44:31.612203   16706 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 23:44:31.612230   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:31.612820   16706 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1209 23:44:31.612874   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:31.992303   16706 node_ready.go:53] node "addons-701527" has status "Ready":"False"
	I1209 23:44:32.113758   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:32.114382   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:32.389631   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.003472062s)
	W1209 23:44:32.389721   16706 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 23:44:32.389727   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.897000638s)
	I1209 23:44:32.389759   16706 addons.go:475] Verifying addon metrics-server=true in "addons-701527"
	I1209 23:44:32.389770   16706 retry.go:31] will retry after 347.537129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
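This first apply fails with a benign ordering race: the VolumeSnapshotClass object is submitted in the same batch as the CRDs that define its kind, and the API server has not finished registering the new types, hence "ensure CRDs are installed first". minikube simply retries (and, below, re-applies with --force), which succeeds once the CRDs are established. Reproduced manually, the usual pattern is to apply the CRDs first and wait for them (a sketch, assuming the same manifest paths on the node):

        kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
        kubectl wait --for condition=established --timeout=60s \
          crd/volumesnapshotclasses.snapshot.storage.k8s.io
        kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml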
	I1209 23:44:32.595972   16706 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1209 23:44:32.596059   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:32.610657   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:32.611320   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:32.617483   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:32.737886   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:44:32.803886   16706 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1209 23:44:32.886563   16706 addons.go:234] Setting addon gcp-auth=true in "addons-701527"
	I1209 23:44:32.886660   16706 host.go:66] Checking if "addons-701527" exists ...
	I1209 23:44:32.887184   16706 cli_runner.go:164] Run: docker container inspect addons-701527 --format={{.State.Status}}
	I1209 23:44:32.913504   16706 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1209 23:44:32.913571   16706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-701527
	I1209 23:44:32.936796   16706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/addons-701527/id_rsa Username:docker}
	I1209 23:44:33.111862   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:33.112513   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:33.218154   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.724661985s)
	I1209 23:44:33.218193   16706 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-701527"
	I1209 23:44:33.219771   16706 out.go:177] * Verifying csi-hostpath-driver addon...
	I1209 23:44:33.222152   16706 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1209 23:44:33.286957   16706 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 23:44:33.286990   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:33.609820   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:33.610492   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:33.725583   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:34.109696   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:34.110302   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:34.225622   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:34.492032   16706 node_ready.go:53] node "addons-701527" has status "Ready":"False"
	I1209 23:44:34.609390   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:34.609950   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:34.725647   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:35.109179   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:35.109569   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:35.225896   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:35.578139   16706 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.840211211s)
	I1209 23:44:35.578229   16706 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.664696486s)
	I1209 23:44:35.580291   16706 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:44:35.581878   16706 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1209 23:44:35.583280   16706 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1209 23:44:35.583298   16706 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1209 23:44:35.600484   16706 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1209 23:44:35.600506   16706 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1209 23:44:35.610201   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:35.610608   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:35.617560   16706 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 23:44:35.617581   16706 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1209 23:44:35.634418   16706 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 23:44:35.725192   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:35.946409   16706 addons.go:475] Verifying addon gcp-auth=true in "addons-701527"
	I1209 23:44:35.948671   16706 out.go:177] * Verifying gcp-auth addon...
	I1209 23:44:35.950763   16706 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1209 23:44:35.986469   16706 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 23:44:35.986496   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:36.109444   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:36.109945   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:36.225289   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:36.453980   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:36.492196   16706 node_ready.go:53] node "addons-701527" has status "Ready":"False"
	I1209 23:44:36.609998   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:36.610522   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:36.726274   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:36.953531   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:37.109254   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:37.109769   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:37.225962   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:37.456192   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:37.609552   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:37.609935   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:37.725494   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:37.954045   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:38.109642   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:38.110070   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:38.226034   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:38.453885   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:38.492238   16706 node_ready.go:53] node "addons-701527" has status "Ready":"False"
	I1209 23:44:38.609839   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:38.610494   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:38.726122   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:38.953821   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:39.110268   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:39.110581   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:39.225684   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:39.454378   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:39.609477   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:39.609982   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:39.725253   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:39.953600   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:40.111198   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:40.111478   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:40.225140   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:40.453623   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:40.609347   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:40.609816   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:40.725411   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:40.953808   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:40.992323   16706 node_ready.go:53] node "addons-701527" has status "Ready":"False"
	I1209 23:44:41.109879   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:41.110412   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:41.225583   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:41.454113   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:41.609926   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:41.610482   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:41.725721   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:41.954108   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:42.108907   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:42.109441   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:42.225590   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:42.453432   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:42.609439   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:42.609833   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:42.725652   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:42.953911   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:43.109597   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:43.110045   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:43.225305   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:43.453760   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:43.492096   16706 node_ready.go:53] node "addons-701527" has status "Ready":"False"
	I1209 23:44:43.609626   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:43.610177   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:43.725393   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:43.953865   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:44.189886   16706 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 23:44:44.189913   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:44.190696   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:44.229151   16706 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 23:44:44.229175   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:44.486102   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:44.493006   16706 node_ready.go:49] node "addons-701527" has status "Ready":"True"
	I1209 23:44:44.493036   16706 node_ready.go:38] duration metric: took 17.003992506s for node "addons-701527" to be "Ready" ...
	I1209 23:44:44.493048   16706 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:44:44.502184   16706 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace to be "Ready" ...
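From here the test tracks node and pod readiness purely by polling. The same checks can be reproduced with kubectl (assuming the context name used throughout this run):

        kubectl --context addons-701527 get node addons-701527
        kubectl --context addons-701527 -n kube-system wait --for=condition=Ready \
          pod/amd-gpu-device-plugin-d2s7j --timeout=6m0s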
	I1209 23:44:44.611984   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:44.612161   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:44.791840   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:44.987270   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:45.109907   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:45.110172   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:45.226905   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:45.454770   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:45.610167   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:45.610491   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:45.726605   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:45.954279   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:46.110630   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:46.111325   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:46.226168   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:46.454490   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:46.508175   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:46.610378   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:46.610715   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:46.726743   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:46.953554   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:47.110778   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:47.111109   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:47.226240   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:47.453593   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:47.609985   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:47.610566   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:47.727362   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:47.986139   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:48.110808   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:48.111553   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:48.227340   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:48.484867   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:48.610678   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:48.611072   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:48.727752   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:48.954685   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:49.007794   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:49.110148   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:49.110513   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:49.226545   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:49.454157   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:49.610550   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:49.610572   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:49.726822   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:49.953679   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:50.110362   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:50.110802   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:50.226841   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:50.453595   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:50.610542   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:50.611195   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:50.727391   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:50.986826   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:51.110670   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:51.110859   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:51.229633   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:51.486197   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:51.508677   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:51.610136   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:51.610219   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:51.727135   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:51.955088   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:52.109501   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:52.110177   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:52.227701   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:52.454536   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:52.609603   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:52.609801   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:52.726852   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:52.954576   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:53.110453   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:53.110684   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:53.226900   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:53.455132   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:53.609662   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:53.610291   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:53.725808   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:53.954032   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:54.009017   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:54.110061   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:54.110327   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:54.225947   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:54.454097   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:54.610729   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:54.611181   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:54.727450   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:54.954124   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:55.109905   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:55.110285   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:55.226058   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:55.454202   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:55.610026   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:55.610402   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:55.726626   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:55.955833   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:56.110706   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:56.111206   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:56.226054   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:56.454614   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:56.507610   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:56.609652   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:56.610317   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:56.726352   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:56.954597   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:57.186969   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:57.188518   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:57.293787   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:57.453704   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:57.610681   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:57.611022   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:57.725858   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:57.954171   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:58.110356   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:58.110982   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:58.227295   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:58.454207   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:58.610566   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:58.610956   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:58.726755   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:58.954404   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:59.008515   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:59.116001   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:59.116281   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:59.289428   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:59.486964   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:59.610853   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:59.613181   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:59.725578   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:59.987214   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:00.110656   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:00.110777   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:00.227667   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:00.485331   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:00.610382   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:00.610922   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:00.727135   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:00.954131   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:01.008852   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:01.110336   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:01.110598   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:01.227392   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:01.454584   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:01.610044   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:01.610826   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:01.726704   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:01.953980   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:02.110740   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:02.111246   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:02.226708   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:02.454770   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:02.610315   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:02.610823   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:02.726939   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:02.954271   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:03.110264   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:03.110519   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:03.227481   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:03.454933   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:03.507645   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:03.609937   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:03.610172   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:03.728750   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:03.953698   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:04.110074   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:04.110207   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:04.226245   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:04.485986   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:04.611573   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:04.611892   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:04.725912   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:04.954232   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:05.110532   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:05.110873   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:05.226804   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:05.454234   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:05.508810   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:05.609972   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:05.610275   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:05.726651   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:05.954444   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:06.111136   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:06.111241   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:06.226210   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:06.454114   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:06.610821   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:06.610970   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:06.726009   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:06.954289   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:07.109933   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:07.110497   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:07.227485   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:07.454831   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:07.610355   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:07.610731   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:07.727265   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:07.954413   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:08.008555   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:08.110151   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:08.110463   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:08.227098   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:08.454024   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:08.611229   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:08.611593   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:08.726744   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:08.954888   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:09.109842   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:09.110202   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:09.226093   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:09.454929   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:09.610203   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:09.610424   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:09.788109   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:09.953915   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:10.109788   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:10.110782   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:10.227211   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:10.454239   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:10.507825   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:10.610493   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:10.610650   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:10.727228   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:10.954350   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:11.110257   16706 kapi.go:107] duration metric: took 39.504036002s to wait for kubernetes.io/minikube-addons=registry ...
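With the registry wait complete, the pods behind that selector can be listed directly. An illustrative check, same context assumed:

	kubectl --context addons-701527 -n kube-system get pods \
	  -l kubernetes.io/minikube-addons=registry -o wide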
	I1209 23:45:11.110459   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:11.226714   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:11.454402   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:11.610823   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:11.726632   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:11.953881   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:12.111412   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:12.226478   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:12.453825   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:12.610441   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:12.729169   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:12.986298   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:13.008207   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:13.111784   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:13.227270   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:13.486901   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:13.686831   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:13.787008   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:13.989743   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:14.187209   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:14.304745   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:14.497017   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:14.687138   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:14.786629   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:14.986361   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:15.008765   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:15.110203   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:15.226213   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:15.485900   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:15.610528   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:15.727011   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:15.986405   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:16.110606   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:16.226680   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:16.486258   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:16.610101   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:16.726381   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:16.954763   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:17.110562   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:17.227011   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:17.454118   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:17.508070   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:17.610569   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:17.727021   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:17.985549   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:18.111059   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:18.226636   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:18.454779   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:18.611290   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:18.726100   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:18.954236   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:19.111339   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:19.225865   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:19.454475   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:19.508224   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:19.610888   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:19.727264   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:19.954340   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:20.111166   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:20.225865   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:20.454474   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:20.611018   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:20.726369   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:20.954345   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:21.110668   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:21.226824   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:21.502904   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:21.508491   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:21.610511   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:21.727429   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:21.954794   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:22.109541   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:22.226964   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:22.454511   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:22.609987   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:22.726099   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:22.985193   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:23.111150   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:23.227018   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:23.455765   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:23.609579   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:23.727338   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:23.953985   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:24.007709   16706 pod_ready.go:103] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:24.110005   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:24.226887   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:24.456358   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:24.508028   16706 pod_ready.go:93] pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:24.508051   16706 pod_ready.go:82] duration metric: took 40.005833308s for pod "amd-gpu-device-plugin-d2s7j" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.508061   16706 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cxp92" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.512157   16706 pod_ready.go:93] pod "coredns-7c65d6cfc9-cxp92" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:24.512175   16706 pod_ready.go:82] duration metric: took 4.108999ms for pod "coredns-7c65d6cfc9-cxp92" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.512196   16706 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-701527" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.515921   16706 pod_ready.go:93] pod "etcd-addons-701527" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:24.515939   16706 pod_ready.go:82] duration metric: took 3.737734ms for pod "etcd-addons-701527" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.515951   16706 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-701527" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.519779   16706 pod_ready.go:93] pod "kube-apiserver-addons-701527" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:24.519798   16706 pod_ready.go:82] duration metric: took 3.840651ms for pod "kube-apiserver-addons-701527" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.519806   16706 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-701527" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.523297   16706 pod_ready.go:93] pod "kube-controller-manager-addons-701527" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:24.523313   16706 pod_ready.go:82] duration metric: took 3.501389ms for pod "kube-controller-manager-addons-701527" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.523323   16706 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qh6vp" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.612318   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:24.726480   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:24.906313   16706 pod_ready.go:93] pod "kube-proxy-qh6vp" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:24.906400   16706 pod_ready.go:82] duration metric: took 383.068779ms for pod "kube-proxy-qh6vp" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.906432   16706 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-701527" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:24.993241   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:25.111726   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:25.288104   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:25.306784   16706 pod_ready.go:93] pod "kube-scheduler-addons-701527" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:25.306810   16706 pod_ready.go:82] duration metric: took 400.364544ms for pod "kube-scheduler-addons-701527" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:25.306823   16706 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace to be "Ready" ...
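pod_ready derives the "Ready":"True"/"False" values above from the pod's status conditions. One way to read the same field manually (a sketch; the jsonpath quoting may need adjusting for your shell):

	kubectl --context addons-701527 -n kube-system get pod metrics-server-84c5f94fbc-5g27r \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'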
	I1209 23:45:25.454216   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:25.610062   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:25.726332   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:25.953814   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:26.109731   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:26.226866   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:26.453797   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:26.610197   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:26.727762   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:26.954147   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:27.110568   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:27.227573   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:27.312826   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:27.454087   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:27.610114   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:27.726580   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:27.986307   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:28.187285   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:28.288662   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:28.486793   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:28.689712   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:28.791247   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:28.993410   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:29.190265   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:29.288413   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:29.389954   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:29.487174   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:29.610459   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:29.794460   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:29.986908   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:30.110135   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:30.287840   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:30.487998   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:30.611464   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:30.727430   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:30.954200   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:31.110513   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:31.227291   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:31.454444   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:31.610805   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:31.727626   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:31.813109   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:31.953660   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:32.110120   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:32.226314   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:32.454258   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:32.611347   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:32.726938   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:32.954369   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:33.110733   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:33.227241   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:33.454833   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:33.610768   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:33.790082   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:33.813471   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:33.954444   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:34.110946   16706 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:34.226884   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:34.486905   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:34.610112   16706 kapi.go:107] duration metric: took 1m3.003891856s to wait for app.kubernetes.io/name=ingress-nginx ...
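Once the app.kubernetes.io/name=ingress-nginx wait finishes, the controller pod can be confirmed against the same selector; a minimal sketch under the same assumptions:

	kubectl --context addons-701527 -n ingress-nginx get pods \
	  -l app.kubernetes.io/name=ingress-nginx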
	I1209 23:45:34.726073   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:34.954035   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:35.226793   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:35.485801   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:35.727323   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:35.954765   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:36.227255   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:36.313068   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:36.455454   16706 kapi.go:107] duration metric: took 1m0.504686907s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1209 23:45:36.457409   16706 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-701527 cluster.
	I1209 23:45:36.458875   16706 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1209 23:45:36.460483   16706 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
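The opt-out described above is per pod. A sketch of a pod that skips the credential mount; the pod spec is hypothetical, the gcp-auth-skip-secret key comes from the message above, and the "true" value is an assumption:

	kubectl --context addons-701527 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth-demo            # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"    # key from the addon message; value assumed
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]
	EOF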
	I1209 23:45:36.727680   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:37.227455   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:37.726516   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:38.227443   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:38.726483   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:38.812762   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:39.227633   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:39.726656   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:40.226930   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:40.726744   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:40.813228   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:41.226121   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:41.726703   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:42.227062   16706 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:42.726384   16706 kapi.go:107] duration metric: took 1m9.504230323s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1209 23:45:42.728398   16706 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, inspektor-gadget, cloud-spanner, ingress-dns, default-storageclass, storage-provisioner, yakd, storage-provisioner-rancher, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1209 23:45:42.729932   16706 addons.go:510] duration metric: took 1m17.481160441s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin inspektor-gadget cloud-spanner ingress-dns default-storageclass storage-provisioner yakd storage-provisioner-rancher metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
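The enabled set can be reviewed after the run; an illustrative check against the same profile:

	minikube addons list -p addons-701527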
	I1209 23:45:43.312724   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:45.812817   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:47.812855   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:50.313364   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:52.812734   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:54.813143   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:57.314128   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:45:59.812298   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:46:01.812878   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:46:03.812913   16706 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"False"
	I1209 23:46:04.812574   16706 pod_ready.go:93] pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace has status "Ready":"True"
	I1209 23:46:04.812598   16706 pod_ready.go:82] duration metric: took 39.505766562s for pod "metrics-server-84c5f94fbc-5g27r" in "kube-system" namespace to be "Ready" ...
	I1209 23:46:04.812608   16706 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-55d28" in "kube-system" namespace to be "Ready" ...
	I1209 23:46:04.817020   16706 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-55d28" in "kube-system" namespace has status "Ready":"True"
	I1209 23:46:04.817041   16706 pod_ready.go:82] duration metric: took 4.42726ms for pod "nvidia-device-plugin-daemonset-55d28" in "kube-system" namespace to be "Ready" ...
	I1209 23:46:04.817059   16706 pod_ready.go:39] duration metric: took 1m20.32397868s waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:46:04.817076   16706 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:46:04.817110   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 23:46:04.817161   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 23:46:04.852318   16706 cri.go:89] found id: "2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b"
	I1209 23:46:04.852349   16706 cri.go:89] found id: ""
	I1209 23:46:04.852356   16706 logs.go:282] 1 containers: [2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b]
	I1209 23:46:04.852410   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:04.855590   16706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 23:46:04.855653   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 23:46:04.887943   16706 cri.go:89] found id: "ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e"
	I1209 23:46:04.887969   16706 cri.go:89] found id: ""
	I1209 23:46:04.887978   16706 logs.go:282] 1 containers: [ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e]
	I1209 23:46:04.888022   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:04.891342   16706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 23:46:04.891417   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 23:46:04.923260   16706 cri.go:89] found id: "077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce"
	I1209 23:46:04.923285   16706 cri.go:89] found id: ""
	I1209 23:46:04.923293   16706 logs.go:282] 1 containers: [077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce]
	I1209 23:46:04.923352   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:04.926703   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 23:46:04.926762   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 23:46:04.960465   16706 cri.go:89] found id: "ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b"
	I1209 23:46:04.960487   16706 cri.go:89] found id: ""
	I1209 23:46:04.960495   16706 logs.go:282] 1 containers: [ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b]
	I1209 23:46:04.960541   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:04.963793   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 23:46:04.963849   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 23:46:04.996394   16706 cri.go:89] found id: "82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b"
	I1209 23:46:04.996415   16706 cri.go:89] found id: ""
	I1209 23:46:04.996422   16706 logs.go:282] 1 containers: [82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b]
	I1209 23:46:04.996473   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:04.999750   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 23:46:04.999800   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 23:46:05.032205   16706 cri.go:89] found id: "25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b"
	I1209 23:46:05.032243   16706 cri.go:89] found id: ""
	I1209 23:46:05.032252   16706 logs.go:282] 1 containers: [25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b]
	I1209 23:46:05.032311   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:05.035468   16706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 23:46:05.035552   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 23:46:05.068521   16706 cri.go:89] found id: "4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697"
	I1209 23:46:05.068542   16706 cri.go:89] found id: ""
	I1209 23:46:05.068550   16706 logs.go:282] 1 containers: [4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697]
	I1209 23:46:05.068594   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:05.071902   16706 logs.go:123] Gathering logs for CRI-O ...
	I1209 23:46:05.071928   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 23:46:05.147074   16706 logs.go:123] Gathering logs for container status ...
	I1209 23:46:05.147109   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 23:46:05.189939   16706 logs.go:123] Gathering logs for dmesg ...
	I1209 23:46:05.189969   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 23:46:05.202543   16706 logs.go:123] Gathering logs for coredns [077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce] ...
	I1209 23:46:05.202572   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce"
	I1209 23:46:05.236746   16706 logs.go:123] Gathering logs for kube-proxy [82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b] ...
	I1209 23:46:05.236775   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b"
	I1209 23:46:05.269131   16706 logs.go:123] Gathering logs for etcd [ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e] ...
	I1209 23:46:05.269159   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e"
	I1209 23:46:05.312923   16706 logs.go:123] Gathering logs for kube-scheduler [ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b] ...
	I1209 23:46:05.312961   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b"
	I1209 23:46:05.350880   16706 logs.go:123] Gathering logs for kube-controller-manager [25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b] ...
	I1209 23:46:05.350914   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b"
	I1209 23:46:05.409010   16706 logs.go:123] Gathering logs for kindnet [4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697] ...
	I1209 23:46:05.409049   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697"
	I1209 23:46:05.442588   16706 logs.go:123] Gathering logs for kubelet ...
	I1209 23:46:05.442614   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 23:46:05.522226   16706 logs.go:123] Gathering logs for describe nodes ...
	I1209 23:46:05.522261   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 23:46:05.618086   16706 logs.go:123] Gathering logs for kube-apiserver [2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b] ...
	I1209 23:46:05.618113   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b"
	I1209 23:46:08.162821   16706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:46:08.177204   16706 api_server.go:72] duration metric: took 1m42.928457075s to wait for apiserver process to appear ...
	I1209 23:46:08.177233   16706 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:46:08.177269   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 23:46:08.177324   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 23:46:08.210360   16706 cri.go:89] found id: "2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b"
	I1209 23:46:08.210382   16706 cri.go:89] found id: ""
	I1209 23:46:08.210391   16706 logs.go:282] 1 containers: [2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b]
	I1209 23:46:08.210449   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:08.213668   16706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 23:46:08.213730   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 23:46:08.246282   16706 cri.go:89] found id: "ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e"
	I1209 23:46:08.246306   16706 cri.go:89] found id: ""
	I1209 23:46:08.246314   16706 logs.go:282] 1 containers: [ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e]
	I1209 23:46:08.246356   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:08.249585   16706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 23:46:08.249642   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 23:46:08.281761   16706 cri.go:89] found id: "077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce"
	I1209 23:46:08.281784   16706 cri.go:89] found id: ""
	I1209 23:46:08.281793   16706 logs.go:282] 1 containers: [077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce]
	I1209 23:46:08.281838   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:08.285092   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 23:46:08.285145   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 23:46:08.317956   16706 cri.go:89] found id: "ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b"
	I1209 23:46:08.317981   16706 cri.go:89] found id: ""
	I1209 23:46:08.317990   16706 logs.go:282] 1 containers: [ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b]
	I1209 23:46:08.318037   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:08.321242   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 23:46:08.321310   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 23:46:08.354116   16706 cri.go:89] found id: "82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b"
	I1209 23:46:08.354139   16706 cri.go:89] found id: ""
	I1209 23:46:08.354147   16706 logs.go:282] 1 containers: [82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b]
	I1209 23:46:08.354192   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:08.357614   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 23:46:08.357677   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 23:46:08.391134   16706 cri.go:89] found id: "25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b"
	I1209 23:46:08.391158   16706 cri.go:89] found id: ""
	I1209 23:46:08.391167   16706 logs.go:282] 1 containers: [25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b]
	I1209 23:46:08.391221   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:08.394525   16706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 23:46:08.394585   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 23:46:08.428821   16706 cri.go:89] found id: "4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697"
	I1209 23:46:08.428846   16706 cri.go:89] found id: ""
	I1209 23:46:08.428853   16706 logs.go:282] 1 containers: [4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697]
	I1209 23:46:08.428901   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:08.432240   16706 logs.go:123] Gathering logs for kubelet ...
	I1209 23:46:08.432262   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 23:46:08.513345   16706 logs.go:123] Gathering logs for dmesg ...
	I1209 23:46:08.513381   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 23:46:08.525375   16706 logs.go:123] Gathering logs for kube-apiserver [2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b] ...
	I1209 23:46:08.525400   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b"
	I1209 23:46:08.568002   16706 logs.go:123] Gathering logs for kube-scheduler [ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b] ...
	I1209 23:46:08.568032   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b"
	I1209 23:46:08.605653   16706 logs.go:123] Gathering logs for kube-proxy [82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b] ...
	I1209 23:46:08.605681   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b"
	I1209 23:46:08.637274   16706 logs.go:123] Gathering logs for CRI-O ...
	I1209 23:46:08.637299   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 23:46:08.709035   16706 logs.go:123] Gathering logs for describe nodes ...
	I1209 23:46:08.709070   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 23:46:08.802753   16706 logs.go:123] Gathering logs for etcd [ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e] ...
	I1209 23:46:08.802781   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e"
	I1209 23:46:08.846140   16706 logs.go:123] Gathering logs for coredns [077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce] ...
	I1209 23:46:08.846172   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce"
	I1209 23:46:08.882461   16706 logs.go:123] Gathering logs for kube-controller-manager [25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b] ...
	I1209 23:46:08.882494   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b"
	I1209 23:46:08.937170   16706 logs.go:123] Gathering logs for kindnet [4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697] ...
	I1209 23:46:08.937202   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697"
	I1209 23:46:08.968760   16706 logs.go:123] Gathering logs for container status ...
	I1209 23:46:08.968792   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 23:46:11.510218   16706 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1209 23:46:11.515435   16706 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1209 23:46:11.516293   16706 api_server.go:141] control plane version: v1.31.2
	I1209 23:46:11.516318   16706 api_server.go:131] duration metric: took 3.339076574s to wait for apiserver health ...
	I1209 23:46:11.516329   16706 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:46:11.516356   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1209 23:46:11.516413   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1209 23:46:11.550773   16706 cri.go:89] found id: "2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b"
	I1209 23:46:11.550802   16706 cri.go:89] found id: ""
	I1209 23:46:11.550812   16706 logs.go:282] 1 containers: [2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b]
	I1209 23:46:11.550856   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:11.553990   16706 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1209 23:46:11.554073   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1209 23:46:11.585952   16706 cri.go:89] found id: "ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e"
	I1209 23:46:11.585980   16706 cri.go:89] found id: ""
	I1209 23:46:11.585990   16706 logs.go:282] 1 containers: [ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e]
	I1209 23:46:11.586037   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:11.589340   16706 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1209 23:46:11.589411   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1209 23:46:11.623262   16706 cri.go:89] found id: "077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce"
	I1209 23:46:11.623284   16706 cri.go:89] found id: ""
	I1209 23:46:11.623292   16706 logs.go:282] 1 containers: [077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce]
	I1209 23:46:11.623365   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:11.626752   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1209 23:46:11.626808   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1209 23:46:11.660667   16706 cri.go:89] found id: "ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b"
	I1209 23:46:11.660695   16706 cri.go:89] found id: ""
	I1209 23:46:11.660704   16706 logs.go:282] 1 containers: [ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b]
	I1209 23:46:11.660763   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:11.664205   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1209 23:46:11.664264   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1209 23:46:11.697948   16706 cri.go:89] found id: "82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b"
	I1209 23:46:11.697970   16706 cri.go:89] found id: ""
	I1209 23:46:11.697983   16706 logs.go:282] 1 containers: [82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b]
	I1209 23:46:11.698049   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:11.701640   16706 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1209 23:46:11.701704   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1209 23:46:11.735675   16706 cri.go:89] found id: "25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b"
	I1209 23:46:11.735699   16706 cri.go:89] found id: ""
	I1209 23:46:11.735706   16706 logs.go:282] 1 containers: [25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b]
	I1209 23:46:11.735768   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:11.739420   16706 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1209 23:46:11.739519   16706 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1209 23:46:11.773718   16706 cri.go:89] found id: "4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697"
	I1209 23:46:11.773738   16706 cri.go:89] found id: ""
	I1209 23:46:11.773745   16706 logs.go:282] 1 containers: [4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697]
	I1209 23:46:11.773786   16706 ssh_runner.go:195] Run: which crictl
	I1209 23:46:11.777093   16706 logs.go:123] Gathering logs for kubelet ...
	I1209 23:46:11.777117   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 23:46:11.857784   16706 logs.go:123] Gathering logs for describe nodes ...
	I1209 23:46:11.857817   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 23:46:11.958361   16706 logs.go:123] Gathering logs for kube-apiserver [2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b] ...
	I1209 23:46:11.958392   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b"
	I1209 23:46:12.001302   16706 logs.go:123] Gathering logs for etcd [ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e] ...
	I1209 23:46:12.001338   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e"
	I1209 23:46:12.044811   16706 logs.go:123] Gathering logs for kube-scheduler [ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b] ...
	I1209 23:46:12.044841   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b"
	I1209 23:46:12.084081   16706 logs.go:123] Gathering logs for kindnet [4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697] ...
	I1209 23:46:12.084112   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697"
	I1209 23:46:12.117877   16706 logs.go:123] Gathering logs for dmesg ...
	I1209 23:46:12.117904   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 23:46:12.129780   16706 logs.go:123] Gathering logs for coredns [077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce] ...
	I1209 23:46:12.129807   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce"
	I1209 23:46:12.164421   16706 logs.go:123] Gathering logs for kube-proxy [82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b] ...
	I1209 23:46:12.164449   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b"
	I1209 23:46:12.199121   16706 logs.go:123] Gathering logs for kube-controller-manager [25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b] ...
	I1209 23:46:12.199153   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b"
	I1209 23:46:12.254981   16706 logs.go:123] Gathering logs for CRI-O ...
	I1209 23:46:12.255018   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1209 23:46:12.332099   16706 logs.go:123] Gathering logs for container status ...
	I1209 23:46:12.332150   16706 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 23:46:14.886263   16706 system_pods.go:59] 19 kube-system pods found
	I1209 23:46:14.886302   16706 system_pods.go:61] "amd-gpu-device-plugin-d2s7j" [d66910fc-8153-4362-b58d-0c34ded7766f] Running
	I1209 23:46:14.886308   16706 system_pods.go:61] "coredns-7c65d6cfc9-cxp92" [747ebb1d-9978-4fa2-ab7e-103305601b72] Running
	I1209 23:46:14.886312   16706 system_pods.go:61] "csi-hostpath-attacher-0" [0b0e2443-64c0-4547-be9d-da1d058bf73d] Running
	I1209 23:46:14.886317   16706 system_pods.go:61] "csi-hostpath-resizer-0" [72a75e84-083b-4ef7-97db-f519225c8067] Running
	I1209 23:46:14.886320   16706 system_pods.go:61] "csi-hostpathplugin-zzv7p" [3ffdcbe8-d1e6-4c70-bab8-fce95d2a2e42] Running
	I1209 23:46:14.886323   16706 system_pods.go:61] "etcd-addons-701527" [c481e420-6a6c-40c8-a459-b2d1c2882635] Running
	I1209 23:46:14.886327   16706 system_pods.go:61] "kindnet-stv96" [257884a2-cdb5-4b33-a038-33b923fd7bc2] Running
	I1209 23:46:14.886330   16706 system_pods.go:61] "kube-apiserver-addons-701527" [8ad3ecac-c702-4d87-b7ae-05bd01c000a7] Running
	I1209 23:46:14.886334   16706 system_pods.go:61] "kube-controller-manager-addons-701527" [5aea4bee-b06d-48c0-9e63-fadf57fcfb4e] Running
	I1209 23:46:14.886337   16706 system_pods.go:61] "kube-ingress-dns-minikube" [a3ab45ca-887e-40ac-aa72-59e45aa061d9] Running
	I1209 23:46:14.886340   16706 system_pods.go:61] "kube-proxy-qh6vp" [c95618c3-d387-449b-8663-ee463b5f6629] Running
	I1209 23:46:14.886343   16706 system_pods.go:61] "kube-scheduler-addons-701527" [8d434eb3-1a1a-418c-9b15-bcaa57a93874] Running
	I1209 23:46:14.886346   16706 system_pods.go:61] "metrics-server-84c5f94fbc-5g27r" [9401b572-a33f-4211-a676-d07847671042] Running
	I1209 23:46:14.886349   16706 system_pods.go:61] "nvidia-device-plugin-daemonset-55d28" [8ebd5f2c-593c-4804-9e9f-91b53ea7fa82] Running
	I1209 23:46:14.886352   16706 system_pods.go:61] "registry-5cc95cd69-hqlfw" [e0b25e01-7672-4537-ae66-04da6fa6f483] Running
	I1209 23:46:14.886355   16706 system_pods.go:61] "registry-proxy-g2wbp" [1e1d2641-f760-4ea1-9dd2-8579da7521e1] Running
	I1209 23:46:14.886358   16706 system_pods.go:61] "snapshot-controller-56fcc65765-4c42c" [d6203375-ea1a-419a-966b-5e73e6464e19] Running
	I1209 23:46:14.886361   16706 system_pods.go:61] "snapshot-controller-56fcc65765-m42bz" [0cbc53f7-0944-4f38-b866-c24619921bf4] Running
	I1209 23:46:14.886363   16706 system_pods.go:61] "storage-provisioner" [1ad7ef52-a84e-493a-ba89-adee18311a9d] Running
	I1209 23:46:14.886369   16706 system_pods.go:74] duration metric: took 3.370034154s to wait for pod list to return data ...
	I1209 23:46:14.886379   16706 default_sa.go:34] waiting for default service account to be created ...
	I1209 23:46:14.888565   16706 default_sa.go:45] found service account: "default"
	I1209 23:46:14.888590   16706 default_sa.go:55] duration metric: took 2.20265ms for default service account to be created ...
	I1209 23:46:14.888601   16706 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 23:46:14.897479   16706 system_pods.go:86] 19 kube-system pods found
	I1209 23:46:14.897509   16706 system_pods.go:89] "amd-gpu-device-plugin-d2s7j" [d66910fc-8153-4362-b58d-0c34ded7766f] Running
	I1209 23:46:14.897515   16706 system_pods.go:89] "coredns-7c65d6cfc9-cxp92" [747ebb1d-9978-4fa2-ab7e-103305601b72] Running
	I1209 23:46:14.897519   16706 system_pods.go:89] "csi-hostpath-attacher-0" [0b0e2443-64c0-4547-be9d-da1d058bf73d] Running
	I1209 23:46:14.897523   16706 system_pods.go:89] "csi-hostpath-resizer-0" [72a75e84-083b-4ef7-97db-f519225c8067] Running
	I1209 23:46:14.897526   16706 system_pods.go:89] "csi-hostpathplugin-zzv7p" [3ffdcbe8-d1e6-4c70-bab8-fce95d2a2e42] Running
	I1209 23:46:14.897529   16706 system_pods.go:89] "etcd-addons-701527" [c481e420-6a6c-40c8-a459-b2d1c2882635] Running
	I1209 23:46:14.897533   16706 system_pods.go:89] "kindnet-stv96" [257884a2-cdb5-4b33-a038-33b923fd7bc2] Running
	I1209 23:46:14.897537   16706 system_pods.go:89] "kube-apiserver-addons-701527" [8ad3ecac-c702-4d87-b7ae-05bd01c000a7] Running
	I1209 23:46:14.897540   16706 system_pods.go:89] "kube-controller-manager-addons-701527" [5aea4bee-b06d-48c0-9e63-fadf57fcfb4e] Running
	I1209 23:46:14.897544   16706 system_pods.go:89] "kube-ingress-dns-minikube" [a3ab45ca-887e-40ac-aa72-59e45aa061d9] Running
	I1209 23:46:14.897548   16706 system_pods.go:89] "kube-proxy-qh6vp" [c95618c3-d387-449b-8663-ee463b5f6629] Running
	I1209 23:46:14.897558   16706 system_pods.go:89] "kube-scheduler-addons-701527" [8d434eb3-1a1a-418c-9b15-bcaa57a93874] Running
	I1209 23:46:14.897561   16706 system_pods.go:89] "metrics-server-84c5f94fbc-5g27r" [9401b572-a33f-4211-a676-d07847671042] Running
	I1209 23:46:14.897569   16706 system_pods.go:89] "nvidia-device-plugin-daemonset-55d28" [8ebd5f2c-593c-4804-9e9f-91b53ea7fa82] Running
	I1209 23:46:14.897574   16706 system_pods.go:89] "registry-5cc95cd69-hqlfw" [e0b25e01-7672-4537-ae66-04da6fa6f483] Running
	I1209 23:46:14.897580   16706 system_pods.go:89] "registry-proxy-g2wbp" [1e1d2641-f760-4ea1-9dd2-8579da7521e1] Running
	I1209 23:46:14.897583   16706 system_pods.go:89] "snapshot-controller-56fcc65765-4c42c" [d6203375-ea1a-419a-966b-5e73e6464e19] Running
	I1209 23:46:14.897588   16706 system_pods.go:89] "snapshot-controller-56fcc65765-m42bz" [0cbc53f7-0944-4f38-b866-c24619921bf4] Running
	I1209 23:46:14.897591   16706 system_pods.go:89] "storage-provisioner" [1ad7ef52-a84e-493a-ba89-adee18311a9d] Running
	I1209 23:46:14.897597   16706 system_pods.go:126] duration metric: took 8.990695ms to wait for k8s-apps to be running ...
	I1209 23:46:14.897605   16706 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 23:46:14.897686   16706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:46:14.908730   16706 system_svc.go:56] duration metric: WaitForService took 11.112132ms to wait for kubelet
	I1209 23:46:14.908758   16706 kubeadm.go:582] duration metric: took 1m49.660016176s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:46:14.908784   16706 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:46:14.911833   16706 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1209 23:46:14.911858   16706 node_conditions.go:123] node cpu capacity is 8
	I1209 23:46:14.911870   16706 node_conditions.go:105] duration metric: took 3.080426ms to run NodePressure ...
	I1209 23:46:14.911880   16706 start.go:241] waiting for startup goroutines ...
	I1209 23:46:14.911887   16706 start.go:246] waiting for cluster config update ...
	I1209 23:46:14.911901   16706 start.go:255] writing updated cluster config ...
	I1209 23:46:14.912162   16706 ssh_runner.go:195] Run: rm -f paused
	I1209 23:46:14.962210   16706 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 23:46:14.964349   16706 out.go:177] * Done! kubectl is now configured to use "addons-701527" cluster and "default" namespace by default
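The start-up log above gathers its node-side diagnostics over SSH; the same commands can be replayed by hand while the profile is running. A minimal sketch, assuming the addons-701527 profile is still up:

	# Replay the log-gathering commands non-interactively on the node
	minikube -p addons-701527 ssh -- sudo journalctl -u kubelet -n 400   # kubelet logs
	minikube -p addons-701527 ssh -- sudo journalctl -u crio -n 400      # CRI-O logs
	minikube -p addons-701527 ssh -- sudo crictl ps -a                   # container status
	# Mirror the apiserver healthz probe from the client side
	kubectl --context addons-701527 get --raw /healthz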
	
	
	==> CRI-O <==
	Dec 09 23:49:45 addons-701527 crio[1028]: time="2024-12-09 23:49:45.673766828Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-5f85ff4588-pbnrr Namespace:ingress-nginx ID:c934de94e4956057244f33cb087e00aca0f89f5e95561dccbfbc9dc2be91e02e UID:ce5b1db6-8d51-4760-bedf-8492e2030336 NetNS:/var/run/netns/6a3f00c2-0859-4143-8578-9899d51df5b1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 09 23:49:45 addons-701527 crio[1028]: time="2024-12-09 23:49:45.673885052Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-5f85ff4588-pbnrr from CNI network \"kindnet\" (type=ptp)"
	Dec 09 23:49:45 addons-701527 crio[1028]: time="2024-12-09 23:49:45.700929078Z" level=info msg="Stopped pod sandbox: c934de94e4956057244f33cb087e00aca0f89f5e95561dccbfbc9dc2be91e02e" id=21841c41-bfb9-450e-9cfb-3216529fd936 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:49:45 addons-701527 crio[1028]: time="2024-12-09 23:49:45.994710874Z" level=info msg="Removing container: eb7e1db683204a72d129582b42fce9a2ce768d681c5179505b19c77698db3e7a" id=16857449-ef22-48a7-ae06-65ae758f4f49 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 23:49:46 addons-701527 crio[1028]: time="2024-12-09 23:49:46.008139359Z" level=info msg="Removed container eb7e1db683204a72d129582b42fce9a2ce768d681c5179505b19c77698db3e7a: ingress-nginx/ingress-nginx-controller-5f85ff4588-pbnrr/controller" id=16857449-ef22-48a7-ae06-65ae758f4f49 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.926272733Z" level=info msg="Removing container: f0e0fc3f457646d22161837f4b09e0d98e48f317212728321d6b0ff15592c0af" id=a80ce22e-6cd3-41fb-8190-722119085102 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.939952368Z" level=info msg="Removed container f0e0fc3f457646d22161837f4b09e0d98e48f317212728321d6b0ff15592c0af: ingress-nginx/ingress-nginx-admission-patch-gghlb/patch" id=a80ce22e-6cd3-41fb-8190-722119085102 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.940980393Z" level=info msg="Removing container: d7afc8b4ed276e0ef39d68b26c99135d7c4d4bdeedbfcf32f7482eb05db44d22" id=0f07c642-b1ce-437b-9ef7-d3e13fd785b6 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.955634137Z" level=info msg="Removed container d7afc8b4ed276e0ef39d68b26c99135d7c4d4bdeedbfcf32f7482eb05db44d22: ingress-nginx/ingress-nginx-admission-create-hnqc5/create" id=0f07c642-b1ce-437b-9ef7-d3e13fd785b6 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.956812055Z" level=info msg="Stopping pod sandbox: 92926b116445c491cd875db8407aeadbaa821ffe925bb903970306f2259d485e" id=d2bdd9e2-09dd-4c22-8bf1-316f145bce87 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.956847980Z" level=info msg="Stopped pod sandbox (already stopped): 92926b116445c491cd875db8407aeadbaa821ffe925bb903970306f2259d485e" id=d2bdd9e2-09dd-4c22-8bf1-316f145bce87 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.957104099Z" level=info msg="Removing pod sandbox: 92926b116445c491cd875db8407aeadbaa821ffe925bb903970306f2259d485e" id=d4638657-7077-46b7-a2f4-d773de88ef2c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.962806174Z" level=info msg="Removed pod sandbox: 92926b116445c491cd875db8407aeadbaa821ffe925bb903970306f2259d485e" id=d4638657-7077-46b7-a2f4-d773de88ef2c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.963279490Z" level=info msg="Stopping pod sandbox: 2a3fd7578af2ba1a123c7693663d482a093fe6c5831cb43225b094ca32c2021e" id=50339c5e-57f9-4e09-9f5b-86b951e3950a name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.963329089Z" level=info msg="Stopped pod sandbox (already stopped): 2a3fd7578af2ba1a123c7693663d482a093fe6c5831cb43225b094ca32c2021e" id=50339c5e-57f9-4e09-9f5b-86b951e3950a name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.963703028Z" level=info msg="Removing pod sandbox: 2a3fd7578af2ba1a123c7693663d482a093fe6c5831cb43225b094ca32c2021e" id=3df16676-6749-4bc3-a60a-27ab24bcc5ff name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.969492935Z" level=info msg="Removed pod sandbox: 2a3fd7578af2ba1a123c7693663d482a093fe6c5831cb43225b094ca32c2021e" id=3df16676-6749-4bc3-a60a-27ab24bcc5ff name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.969916645Z" level=info msg="Stopping pod sandbox: 235a47d127e9daf603c972f7ed60f070d5be66369a589661d45c68a0df601176" id=1d0cf0ae-9504-40b4-966e-a87140762acf name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.969953741Z" level=info msg="Stopped pod sandbox (already stopped): 235a47d127e9daf603c972f7ed60f070d5be66369a589661d45c68a0df601176" id=1d0cf0ae-9504-40b4-966e-a87140762acf name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.970210418Z" level=info msg="Removing pod sandbox: 235a47d127e9daf603c972f7ed60f070d5be66369a589661d45c68a0df601176" id=d88da6f4-0938-4330-b2fd-f6aa9a3e8c37 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.975868545Z" level=info msg="Removed pod sandbox: 235a47d127e9daf603c972f7ed60f070d5be66369a589661d45c68a0df601176" id=d88da6f4-0938-4330-b2fd-f6aa9a3e8c37 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.976263965Z" level=info msg="Stopping pod sandbox: c934de94e4956057244f33cb087e00aca0f89f5e95561dccbfbc9dc2be91e02e" id=a3a7a6e9-dcac-43ef-8bd2-f0a37cfb0381 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.976294730Z" level=info msg="Stopped pod sandbox (already stopped): c934de94e4956057244f33cb087e00aca0f89f5e95561dccbfbc9dc2be91e02e" id=a3a7a6e9-dcac-43ef-8bd2-f0a37cfb0381 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.976607629Z" level=info msg="Removing pod sandbox: c934de94e4956057244f33cb087e00aca0f89f5e95561dccbfbc9dc2be91e02e" id=43ce2c3d-50da-4b8e-952e-5b4cbf1290f9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 09 23:50:19 addons-701527 crio[1028]: time="2024-12-09 23:50:19.982976711Z" level=info msg="Removed pod sandbox: c934de94e4956057244f33cb087e00aca0f89f5e95561dccbfbc9dc2be91e02e" id=43ce2c3d-50da-4b8e-952e-5b4cbf1290f9 name=/runtime.v1.RuntimeService/RemovePodSandbox
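The StopPodSandbox/RemovePodSandbox entries above are CRI-O tearing down the ingress-nginx pods after the test. The same sandbox state can be inspected from the node with crictl; a sketch, where <pod-sandbox-id> is a placeholder:

	sudo crictl pods                       # list pod sandboxes and their STATE
	sudo crictl inspectp <pod-sandbox-id>  # full sandbox metadata as JSON
	sudo crictl rmp <pod-sandbox-id>       # remove a stopped sandbox, as CRI-O does above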
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1964c247bf588       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   babd6ccc73c92       hello-world-app-55bf9c44b4-tqms7
	10645a732c52f       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         5 minutes ago       Running             nginx                     0                   0856bdf086c6e       nginx
	929f01b55a429       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   714101868ecf3       busybox
	6ab22756eac7d       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   7 minutes ago       Running             metrics-server            0                   1d734532e8663       metrics-server-84c5f94fbc-5g27r
	077b85dec6016       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        7 minutes ago       Running             coredns                   0                   c7fdeb3ef79c5       coredns-7c65d6cfc9-cxp92
	c3dbc738cd674       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   51167656b2653       storage-provisioner
	4147aac866b08       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3                      8 minutes ago       Running             kindnet-cni               0                   9d6850a00e670       kindnet-stv96
	82e3678abb6c9       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        8 minutes ago       Running             kube-proxy                0                   aa81dd7c70085       kube-proxy-qh6vp
	25f645634229e       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        8 minutes ago       Running             kube-controller-manager   0                   bf9719be3d23e       kube-controller-manager-addons-701527
	2618b912b82c1       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        8 minutes ago       Running             kube-apiserver            0                   de652e399d7b7       kube-apiserver-addons-701527
	ae395d5ec54d0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   88e1369f9daff       etcd-addons-701527
	ce5eeb0ab7db8       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        8 minutes ago       Running             kube-scheduler            0                   103590d89ff0a       kube-scheduler-addons-701527
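The first column is the truncated container ID, which is what the log-gathering steps above pass to crictl; any unique ID prefix is accepted. A sketch using the coredns ID from this table:

	sudo crictl logs --tail 50 077b85dec6016   # tail the coredns container log
	sudo crictl inspect 077b85dec6016          # full container metadata as JSON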
	
	
	==> coredns [077b85dec601666c931849a1ed7d1f0ca4b51894a0cd9c5efe46dbc1540a65ce] <==
	[INFO] 10.244.0.21:52884 - 54108 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00649632s
	[INFO] 10.244.0.21:42105 - 18210 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005298524s
	[INFO] 10.244.0.21:41124 - 27071 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006420103s
	[INFO] 10.244.0.21:58011 - 49059 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006217102s
	[INFO] 10.244.0.21:38735 - 31028 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005601169s
	[INFO] 10.244.0.21:36362 - 43549 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006184764s
	[INFO] 10.244.0.21:52884 - 31742 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006038936s
	[INFO] 10.244.0.21:46709 - 2873 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006405557s
	[INFO] 10.244.0.21:33930 - 41697 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006162307s
	[INFO] 10.244.0.21:41124 - 57471 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006525827s
	[INFO] 10.244.0.21:42105 - 19185 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006607431s
	[INFO] 10.244.0.21:36362 - 54769 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006490862s
	[INFO] 10.244.0.21:33930 - 15593 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006334137s
	[INFO] 10.244.0.21:46709 - 35023 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006314514s
	[INFO] 10.244.0.21:41124 - 18117 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000144065s
	[INFO] 10.244.0.21:58011 - 30542 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006701249s
	[INFO] 10.244.0.21:38735 - 34664 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006820043s
	[INFO] 10.244.0.21:42105 - 16845 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000167088s
	[INFO] 10.244.0.21:52884 - 49458 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006833507s
	[INFO] 10.244.0.21:58011 - 41298 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069276s
	[INFO] 10.244.0.21:33930 - 14742 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000051934s
	[INFO] 10.244.0.21:38735 - 27083 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060181s
	[INFO] 10.244.0.21:46709 - 3023 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00012474s
	[INFO] 10.244.0.21:36362 - 14469 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000190772s
	[INFO] 10.244.0.21:52884 - 59270 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00007468s
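The runs of NXDOMAIN answers above are expected resolver behavior rather than failures: with the default ndots:5 option, each lookup for hello-world-app is first expanded through every search suffix (here including the host VM's c.k8s-minikube.internal and google.internal domains) before the fully-qualified cluster name answers NOERROR. A sketch for confirming the search path from inside a pod, using the busybox pod from this run (the commented output is illustrative):

	kubectl --context addons-701527 exec busybox -- cat /etc/resolv.conf
	# search default.svc.cluster.local svc.cluster.local cluster.local c.k8s-minikube.internal google.internal
	# nameserver 10.96.0.10
	# options ndots:5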
	
	
	==> describe nodes <==
	Name:               addons-701527
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-701527
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=addons-701527
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T23_44_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-701527
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 23:44:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-701527
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 23:52:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 23:49:56 +0000   Mon, 09 Dec 2024 23:44:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 23:49:56 +0000   Mon, 09 Dec 2024 23:44:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 23:49:56 +0000   Mon, 09 Dec 2024 23:44:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 23:49:56 +0000   Mon, 09 Dec 2024 23:44:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-701527
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	System Info:
	  Machine ID:                 775aa4cfeeac4fecbd38c1488c56dfa0
	  System UUID:                05d8adc9-27f0-43f6-9f5c-2780c35710f8
	  Boot ID:                    fcda772d-4207-4ab9-84d8-f9ba5cb81f2f
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  default                     hello-world-app-55bf9c44b4-tqms7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 coredns-7c65d6cfc9-cxp92                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m11s
	  kube-system                 etcd-addons-701527                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m17s
	  kube-system                 kindnet-stv96                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m11s
	  kube-system                 kube-apiserver-addons-701527             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-controller-manager-addons-701527    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-proxy-qh6vp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-scheduler-addons-701527             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 metrics-server-84c5f94fbc-5g27r          100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         8m6s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 8m6s   kube-proxy       
	  Normal   Starting                 8m17s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m17s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m17s  kubelet          Node addons-701527 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m17s  kubelet          Node addons-701527 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m17s  kubelet          Node addons-701527 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m12s  node-controller  Node addons-701527 event: Registered Node addons-701527 in Controller
	  Normal   NodeReady                7m52s  kubelet          Node addons-701527 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000758] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.005178] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001365] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.645483] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025447] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.034285] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.032948] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +7.141697] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 9 23:47] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[  +1.015721] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[  +2.011802] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[  +4.127509] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[  +8.191113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[ +16.130221] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[Dec 9 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
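A "martian source" entry means the kernel saw a packet it considers impossible for the receiving interface; in the message format above the first address is the destination (pod IP 10.244.0.21) and the second the claimed source (127.0.0.1), arriving on eth0. Whether such packets are logged is gated by the log_martians sysctl, alongside reverse-path filtering. A sketch for checking both knobs on the node:

	minikube -p addons-701527 ssh -- sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians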
	
	
	==> etcd [ae395d5ec54d0a2e6095de65e0ac333727786c97a612f22ea881a819a91df09e] <==
	{"level":"info","ts":"2024-12-09T23:44:28.188269Z","caller":"traceutil/trace.go:171","msg":"trace[325315929] transaction","detail":"{read_only:false; number_of_response:1; response_revision:387; }","duration":"203.880945ms","start":"2024-12-09T23:44:27.984378Z","end":"2024-12-09T23:44:28.188259Z","steps":["trace[325315929] 'process raft request'  (duration: 201.436967ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:44:28.303994Z","caller":"traceutil/trace.go:171","msg":"trace[1532357115] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"119.952721ms","start":"2024-12-09T23:44:28.184000Z","end":"2024-12-09T23:44:28.303952Z","steps":["trace[1532357115] 'process raft request'  (duration: 105.19725ms)","trace[1532357115] 'compare'  (duration: 12.783026ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:44:28.304566Z","caller":"traceutil/trace.go:171","msg":"trace[393444608] linearizableReadLoop","detail":"{readStateIndex:399; appliedIndex:398; }","duration":"116.642703ms","start":"2024-12-09T23:44:28.187909Z","end":"2024-12-09T23:44:28.304551Z","steps":["trace[393444608] 'read index received'  (duration: 102.146105ms)","trace[393444608] 'applied index is now lower than readState.Index'  (duration: 14.481527ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:44:28.304741Z","caller":"traceutil/trace.go:171","msg":"trace[1793983851] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"115.980584ms","start":"2024-12-09T23:44:28.188750Z","end":"2024-12-09T23:44:28.304731Z","steps":["trace[1793983851] 'process raft request'  (duration: 113.368616ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:44:28.306758Z","caller":"traceutil/trace.go:171","msg":"trace[1446547101] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"117.685631ms","start":"2024-12-09T23:44:28.189051Z","end":"2024-12-09T23:44:28.306737Z","steps":["trace[1446547101] 'process raft request'  (duration: 113.110432ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:44:28.306955Z","caller":"traceutil/trace.go:171","msg":"trace[897928018] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"112.895918ms","start":"2024-12-09T23:44:28.194051Z","end":"2024-12-09T23:44:28.306947Z","steps":["trace[897928018] 'process raft request'  (duration: 108.142976ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:44:28.307052Z","caller":"traceutil/trace.go:171","msg":"trace[329491598] transaction","detail":"{read_only:false; number_of_response:1; response_revision:393; }","duration":"112.966783ms","start":"2024-12-09T23:44:28.194079Z","end":"2024-12-09T23:44:28.307046Z","steps":["trace[329491598] 'process raft request'  (duration: 109.094567ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:44:28.383748Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.792811ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4096"}
	{"level":"info","ts":"2024-12-09T23:44:28.383885Z","caller":"traceutil/trace.go:171","msg":"trace[1601394470] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:394; }","duration":"195.962881ms","start":"2024-12-09T23:44:28.187904Z","end":"2024-12-09T23:44:28.383867Z","steps":["trace[1601394470] 'agreement among raft nodes before linearized reading'  (duration: 119.576811ms)","trace[1601394470] 'range keys from bolt db'  (duration: 76.184625ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:44:28.401149Z","caller":"traceutil/trace.go:171","msg":"trace[190457932] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"103.731437ms","start":"2024-12-09T23:44:28.297399Z","end":"2024-12-09T23:44:28.401131Z","steps":["trace[190457932] 'process raft request'  (duration: 103.465483ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:44:28.401354Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.523072ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-701527\" ","response":"range_response_count:1 size:5655"}
	{"level":"info","ts":"2024-12-09T23:44:28.401460Z","caller":"traceutil/trace.go:171","msg":"trace[862798791] range","detail":"{range_begin:/registry/minions/addons-701527; range_end:; response_count:1; response_revision:395; }","duration":"212.639122ms","start":"2024-12-09T23:44:28.188810Z","end":"2024-12-09T23:44:28.401449Z","steps":["trace[862798791] 'agreement among raft nodes before linearized reading'  (duration: 212.493838ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:44:28.600604Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.504839ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-12-09T23:44:28.600686Z","caller":"traceutil/trace.go:171","msg":"trace[181894478] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:402; }","duration":"107.601345ms","start":"2024-12-09T23:44:28.493054Z","end":"2024-12-09T23:44:28.600655Z","steps":["trace[181894478] 'agreement among raft nodes before linearized reading'  (duration: 107.423959ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:44:28.600975Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.951275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-701527\" ","response":"range_response_count:1 size:5655"}
	{"level":"info","ts":"2024-12-09T23:44:28.601109Z","caller":"traceutil/trace.go:171","msg":"trace[244838227] range","detail":"{range_begin:/registry/minions/addons-701527; range_end:; response_count:1; response_revision:402; }","duration":"108.059838ms","start":"2024-12-09T23:44:28.493012Z","end":"2024-12-09T23:44:28.601072Z","steps":["trace[244838227] 'agreement among raft nodes before linearized reading'  (duration: 107.923039ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:44:28.601462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.33806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/traces.gadget.kinvolk.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:44:28.601552Z","caller":"traceutil/trace.go:171","msg":"trace[55582856] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/traces.gadget.kinvolk.io; range_end:; response_count:0; response_revision:402; }","duration":"108.428175ms","start":"2024-12-09T23:44:28.493113Z","end":"2024-12-09T23:44:28.601541Z","steps":["trace[55582856] 'agreement among raft nodes before linearized reading'  (duration: 108.303748ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:44:29.000571Z","caller":"traceutil/trace.go:171","msg":"trace[149294942] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"100.040439ms","start":"2024-12-09T23:44:28.900518Z","end":"2024-12-09T23:44:29.000559Z","steps":["trace[149294942] 'process raft request'  (duration: 87.770465ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:44:29.099884Z","caller":"traceutil/trace.go:171","msg":"trace[537937198] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"106.241349ms","start":"2024-12-09T23:44:28.993626Z","end":"2024-12-09T23:44:29.099867Z","steps":["trace[537937198] 'process raft request'  (duration: 91.607451ms)","trace[537937198] 'compare'  (duration: 14.381518ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T23:44:29.100414Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.225693ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-701527\" ","response":"range_response_count:1 size:5655"}
	{"level":"info","ts":"2024-12-09T23:44:29.100709Z","caller":"traceutil/trace.go:171","msg":"trace[1047754935] range","detail":"{range_begin:/registry/minions/addons-701527; range_end:; response_count:1; response_revision:417; }","duration":"100.512235ms","start":"2024-12-09T23:44:29.000170Z","end":"2024-12-09T23:44:29.100682Z","steps":["trace[1047754935] 'agreement among raft nodes before linearized reading'  (duration: 100.0103ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:44:29.101046Z","caller":"traceutil/trace.go:171","msg":"trace[1793953] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"100.789141ms","start":"2024-12-09T23:44:29.000245Z","end":"2024-12-09T23:44:29.101034Z","steps":["trace[1793953] 'process raft request'  (duration: 100.511212ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:44:29.102030Z","caller":"traceutil/trace.go:171","msg":"trace[136168466] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"101.490397ms","start":"2024-12-09T23:44:29.000529Z","end":"2024-12-09T23:44:29.102019Z","steps":["trace[136168466] 'process raft request'  (duration: 100.315559ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:45:49.656961Z","caller":"traceutil/trace.go:171","msg":"trace[1818210935] transaction","detail":"{read_only:false; response_revision:1211; number_of_response:1; }","duration":"106.060224ms","start":"2024-12-09T23:45:49.550884Z","end":"2024-12-09T23:45:49.656944Z","steps":["trace[1818210935] 'process raft request'  (duration: 105.954366ms)"],"step_count":1}
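Every warning in this section is etcd's slow-apply tracer firing past its fixed 100ms expected-duration; the step breakdowns (e.g. 'range keys from bolt db' at 76ms) point at disk contention on the shared CI host rather than a raft or quorum problem. A sketch for reading backend state directly, assuming the kubeadm-style pod name etcd-addons-701527 and minikube's certificate paths:

    kubectl --context addons-701527 -n kube-system exec etcd-addons-701527 -- \
      etcdctl --endpoints=https://127.0.0.1:2379 \
        --cacert=/var/lib/minikube/certs/etcd/ca.crt \
        --cert=/var/lib/minikube/certs/etcd/server.crt \
        --key=/var/lib/minikube/certs/etcd/server.key \
        endpoint status -w table
    # Reading the RAFT INDEX and DB SIZE columns twice, a few seconds apart,
    # distinguishes slow disk commits from a stalled raft log.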
	
	
	==> kernel <==
	 23:52:36 up 35 min,  0 users,  load average: 0.17, 0.43, 0.34
	Linux addons-701527 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4147aac866b089e0b890de4c9890a448974afcebad4fdb81e84e434d3f0ef697] <==
	I1209 23:50:33.784762       1 main.go:301] handling current node
	I1209 23:50:43.791602       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:50:43.791638       1 main.go:301] handling current node
	I1209 23:50:53.792771       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:50:53.792809       1 main.go:301] handling current node
	I1209 23:51:03.793912       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:51:03.793947       1 main.go:301] handling current node
	I1209 23:51:13.791749       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:51:13.791782       1 main.go:301] handling current node
	I1209 23:51:23.786363       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:51:23.786427       1 main.go:301] handling current node
	I1209 23:51:33.784730       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:51:33.784769       1 main.go:301] handling current node
	I1209 23:51:43.784683       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:51:43.784721       1 main.go:301] handling current node
	I1209 23:51:53.787315       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:51:53.787374       1 main.go:301] handling current node
	I1209 23:52:03.791615       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:52:03.791656       1 main.go:301] handling current node
	I1209 23:52:13.793994       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:52:13.794046       1 main.go:301] handling current node
	I1209 23:52:23.787575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:52:23.787625       1 main.go:301] handling current node
	I1209 23:52:33.784769       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:52:33.784811       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2618b912b82c10d5f222bfaff9643d83dfae4e0e78581b1cd2ab2adcf04cba8b] <==
	E1209 23:46:04.409438       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.68.101:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.68.101:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.68.101:443: connect: connection refused" logger="UnhandledError"
	I1209 23:46:04.440705       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1209 23:46:22.656162       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58740: use of closed network connection
	E1209 23:46:22.815646       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58772: use of closed network connection
	I1209 23:46:31.754419       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.47.197"}
	I1209 23:46:37.493599       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1209 23:46:38.609504       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1209 23:47:02.124474       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1209 23:47:14.982082       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1209 23:47:18.162100       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1209 23:47:18.324563       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.4.42"}
	I1209 23:47:21.753833       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:21.753877       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:47:21.802327       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:21.802524       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:47:21.888483       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:21.888532       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:47:21.889403       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:21.889507       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:47:21.909524       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:21.909567       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1209 23:47:22.888525       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1209 23:47:22.909765       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1209 23:47:23.006378       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1209 23:49:38.100259       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.75.201"}
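The metrics.k8s.io errors at the top of this section are the aggregation layer dialing the metrics-server Service before a backing pod is ready, which is the same symptom the failed MetricsServer test measures. Two standard checks (nothing assumed beyond the context name):

    kubectl --context addons-701527 get apiservice v1beta1.metrics.k8s.io
    kubectl --context addons-701527 -n kube-system get endpoints metrics-server
    # Available=False with FailedDiscoveryCheck plus an empty endpoints list
    # means the aggregated API has nothing to proxy to.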
	
	
	==> kube-controller-manager [25f645634229efe3e653268b9741eae7453fce1c37d63abde08664d3250c975b] <==
	E1209 23:50:30.645308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:50:51.703643       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:50:51.703689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:50:53.660629       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:50:53.660679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:50:59.013176       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:50:59.013215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:51:14.244438       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:51:14.244479       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:51:26.788905       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:51:26.788960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:51:29.337441       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:51:29.337482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:51:48.098523       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:51:48.098565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:51:53.917464       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:51:53.917513       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:52:00.081102       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:52:00.081145       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:52:12.506912       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:52:12.506955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:52:27.465416       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:52:27.465466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:52:30.170123       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:52:30.170169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
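This repeating failure starts immediately after the apiserver terminated the snapshot.storage.k8s.io watchers (23:47:22 in the section above): the garbage collector's metadata informers keep listing resource types whose CRDs were removed when the addon was disabled. Confirming the CRDs are gone is enough to explain the loop:

    kubectl --context addons-701527 get crd | grep -E 'snapshot|gadget'
    # No matches means every subsequent list legitimately returns
    # "the server could not find the requested resource".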
	
	
	==> kube-proxy [82e3678abb6c9b12f6ffdaaf6056d6ca675089fbc96ffc6b1f11bbb91e337e9b] <==
	I1209 23:44:27.802596       1 server_linux.go:66] "Using iptables proxy"
	I1209 23:44:29.108824       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1209 23:44:29.108882       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 23:44:29.684759       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 23:44:29.684881       1 server_linux.go:169] "Using iptables Proxier"
	I1209 23:44:29.699616       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 23:44:29.700499       1 server.go:483] "Version info" version="v1.31.2"
	I1209 23:44:29.700597       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:44:29.704576       1 config.go:105] "Starting endpoint slice config controller"
	I1209 23:44:29.704686       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 23:44:29.704767       1 config.go:199] "Starting service config controller"
	I1209 23:44:29.704800       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 23:44:29.704988       1 config.go:328] "Starting node config controller"
	I1209 23:44:29.705074       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 23:44:29.805457       1 shared_informer.go:320] Caches are synced for service config
	I1209 23:44:29.805650       1 shared_informer.go:320] Caches are synced for node config
	I1209 23:44:29.805703       1 shared_informer.go:320] Caches are synced for endpoint slice config
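The route_localnet line is the one relevant to the Ingress failure: kube-proxy sets it so NodePort traffic addressed to 127.0.0.1 stays routable, which is exactly the path `curl -s http://127.0.0.1/` exercises. A read-only verification inside the node:

    minikube -p addons-701527 ssh -- sysctl net.ipv4.conf.all.route_localnet
    # Expected "= 1" while kube-proxy is healthy; "= 0" would make the
    # martian-source drops in the kernel log fatal for localhost NodePorts.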
	
	
	==> kube-scheduler [ce5eeb0ab7db86eb4522327cf1201850405e08e9b448060ced4fa934b577678b] <==
	W1209 23:44:17.595236       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1209 23:44:17.595330       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1209 23:44:17.595347       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1209 23:44:17.595358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.595235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1209 23:44:17.595379       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E1209 23:44:17.595395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.595398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1209 23:44:17.595354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1209 23:44:17.595428       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.595431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1209 23:44:17.595479       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 23:44:17.595525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1209 23:44:17.595483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.595591       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 23:44:17.595622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:18.410848       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1209 23:44:18.410886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:18.419558       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 23:44:18.419592       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1209 23:44:18.624088       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 23:44:18.624127       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:18.642537       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 23:44:18.642575       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1209 23:44:20.092161       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
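The forbidden errors here are the normal startup race: the scheduler's informers begin listing before its RBAC bindings have propagated, and the section ends with caches synced at 23:44:20, so none of this is fatal. Impersonation verifies it after the fact:

    kubectl --context addons-701527 auth can-i list pods --as=system:kube-scheduler
    # "yes" once the cluster is up confirms the denials were transient.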
	
	
	==> kubelet <==
	Dec 09 23:50:49 addons-701527 kubelet[1635]: E1209 23:50:49.860764    1635 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788249860542559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:50:49 addons-701527 kubelet[1635]: E1209 23:50:49.860834    1635 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788249860542559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:50:59 addons-701527 kubelet[1635]: E1209 23:50:59.863409    1635 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788259863137240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:50:59 addons-701527 kubelet[1635]: E1209 23:50:59.863449    1635 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788259863137240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:09 addons-701527 kubelet[1635]: E1209 23:51:09.865703    1635 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788269865510017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:09 addons-701527 kubelet[1635]: E1209 23:51:09.865750    1635 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788269865510017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:13 addons-701527 kubelet[1635]: I1209 23:51:13.670991    1635 reconciler_common.go:281] "operationExecutor.UnmountDevice started for volume \"pvc-956545b7-c0cf-4557-897d-c2b7bd665aa3\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^dbe356fe-b687-11ef-9263-22c3385a1985\") on node \"addons-701527\" "
	Dec 09 23:51:13 addons-701527 kubelet[1635]: E1209 23:51:13.672355    1635 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/hostpath.csi.k8s.io^dbe356fe-b687-11ef-9263-22c3385a1985 podName: nodeName:}" failed. No retries permitted until 2024-12-09 23:53:15.672337617 +0000 UTC m=+536.110564815 (durationBeforeRetry 2m2s). Error: UnmountDevice failed for volume "pvc-956545b7-c0cf-4557-897d-c2b7bd665aa3" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^dbe356fe-b687-11ef-9263-22c3385a1985") on node "addons-701527" : kubernetes.io/csi: attacher.UnmountDevice failed to create newCsiDriverClient: driver name hostpath.csi.k8s.io not found in the list of registered CSI drivers
	Dec 09 23:51:19 addons-701527 kubelet[1635]: E1209 23:51:19.868599    1635 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788279868385980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:19 addons-701527 kubelet[1635]: E1209 23:51:19.868631    1635 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788279868385980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:29 addons-701527 kubelet[1635]: E1209 23:51:29.870879    1635 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788289870616866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:29 addons-701527 kubelet[1635]: E1209 23:51:29.870912    1635 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788289870616866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:39 addons-701527 kubelet[1635]: E1209 23:51:39.873509    1635 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788299873278624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:39 addons-701527 kubelet[1635]: E1209 23:51:39.873544    1635 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788299873278624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:49 addons-701527 kubelet[1635]: E1209 23:51:49.876111    1635 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788309875886354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:49 addons-701527 kubelet[1635]: E1209 23:51:49.876145    1635 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788309875886354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:56 addons-701527 kubelet[1635]: I1209 23:51:56.637651    1635 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 23:51:59 addons-701527 kubelet[1635]: E1209 23:51:59.878962    1635 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788319878718845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:59 addons-701527 kubelet[1635]: E1209 23:51:59.879009    1635 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788319878718845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:52:09 addons-701527 kubelet[1635]: E1209 23:52:09.881659    1635 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788329881433188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:52:09 addons-701527 kubelet[1635]: E1209 23:52:09.881690    1635 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788329881433188,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:52:19 addons-701527 kubelet[1635]: E1209 23:52:19.884397    1635 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788339884159115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:52:19 addons-701527 kubelet[1635]: E1209 23:52:19.884430    1635 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788339884159115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:52:29 addons-701527 kubelet[1635]: E1209 23:52:29.886518    1635 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788349886279172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:52:29 addons-701527 kubelet[1635]: E1209 23:52:29.886549    1635 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788349886279172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:626970,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
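The eviction-manager pair repeats every ten seconds because CRI-O's ImageFsInfo response carries an image filesystem but an empty ContainerFilesystems list, which this kubelet treats as missing image stats; it blocks eviction decisions but does not affect running pods. The same CRI data can be read directly, assuming crictl is present in the node image:

    minikube -p addons-701527 ssh -- sudo crictl imagefsinfo
    # Shows the /var/lib/containers/storage/overlay-images mountpoint from the
    # log and the empty container-filesystem list that trips HasDedicatedImageFs.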
	
	
	==> storage-provisioner [c3dbc738cd6745fb4f46c65bb5828e5e2bb14cd4b9623bc4688e129fedfe42e5] <==
	I1209 23:44:44.991990       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 23:44:44.999539       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 23:44:44.999596       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 23:44:45.008366       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 23:44:45.008511       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b90eb7c8-2ae9-4bf6-89db-add0f773b69f", APIVersion:"v1", ResourceVersion:"890", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-701527_47b9cfa5-c4a3-4d5d-adbb-82aacbcfae88 became leader
	I1209 23:44:45.008555       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-701527_47b9cfa5-c4a3-4d5d-adbb-82aacbcfae88!
	I1209 23:44:45.109376       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-701527_47b9cfa5-c4a3-4d5d-adbb-82aacbcfae88!
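The provisioner takes its leader lock via the kube-system/k8s.io-minikube-hostpath Endpoints object named in the event above; the current holder is recorded in the standard leader-election annotation. A sketch for inspecting it (annotation key assumed from client-go's endpoints resource lock):

    kubectl --context addons-701527 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
    # The control-plane.alpha.kubernetes.io/leader annotation carries
    # holderIdentity, acquireTime and renewTime for the active leader.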
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-701527 -n addons-701527
helpers_test.go:261: (dbg) Run:  kubectl --context addons-701527 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (366.66s)

+
TestFunctional/parallel/MySQL (602.8s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-113090 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-qdsk6" [bbdea9c7-f8a5-498f-a3a8-e10ee2a31a61] Pending
helpers_test.go:344: "mysql-6cdb49bbb-qdsk6" [bbdea9c7-f8a5-498f-a3a8-e10ee2a31a61] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1799: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1799: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-113090 -n functional-113090
functional_test.go:1799: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2024-12-10 00:05:59.038703967 +0000 UTC m=+1352.184573644
functional_test.go:1799: (dbg) Run:  kubectl --context functional-113090 describe po mysql-6cdb49bbb-qdsk6 -n default
functional_test.go:1799: (dbg) kubectl --context functional-113090 describe po mysql-6cdb49bbb-qdsk6 -n default:
Name:             mysql-6cdb49bbb-qdsk6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-113090/192.168.49.2
Start Time:       Mon, 09 Dec 2024 23:55:58 +0000
Labels:           app=mysql
                  pod-template-hash=6cdb49bbb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
  IP:           10.244.0.12
Controlled By:  ReplicaSet/mysql-6cdb49bbb
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mt5pg (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-mt5pg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-6cdb49bbb-qdsk6 to functional-113090
Normal   Pulling    7m5s (x4 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     6m34s (x4 over 9m30s)  kubelet            Error: ErrImagePull
Warning  Failed     6m34s (x2 over 8m45s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     6m10s (x6 over 9m29s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    5m58s (x7 over 9m29s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
Warning  Failed     4m33s (x3 over 9m30s)  kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
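The root cause is external to the cluster: anonymous pulls of docker.io/mysql:5.7 from the shared CI address exhausted Docker Hub's rate limit. Two common mitigations, sketched under the assumption that the runner either already holds the image locally or has Hub credentials:

    # 1) Side-load the image so the kubelet never pulls from Docker Hub
    #    (requires mysql:5.7 in the runner's local image store):
    minikube -p functional-113090 image load docker.io/mysql:5.7

    # 2) Or authenticate pulls with an image pull secret, referenced from
    #    testdata/mysql.yaml via imagePullSecrets:
    kubectl --context functional-113090 create secret docker-registry dockerhub \
      --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PASS"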
functional_test.go:1799: (dbg) Run:  kubectl --context functional-113090 logs mysql-6cdb49bbb-qdsk6 -n default
functional_test.go:1799: (dbg) Non-zero exit: kubectl --context functional-113090 logs mysql-6cdb49bbb-qdsk6 -n default: exit status 1 (67.995565ms)

** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-6cdb49bbb-qdsk6" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1799: kubectl --context functional-113090 logs mysql-6cdb49bbb-qdsk6 -n default: exit status 1
functional_test.go:1801: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-113090
helpers_test.go:235: (dbg) docker inspect functional-113090:

-- stdout --
	[
	    {
	        "Id": "afe05cf6c38af958612b6f75979c02308d9a71092bad4d4f38a2a3c772f45f8f",
	        "Created": "2024-12-09T23:53:32.545612687Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 41365,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-09T23:53:32.658348168Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/afe05cf6c38af958612b6f75979c02308d9a71092bad4d4f38a2a3c772f45f8f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afe05cf6c38af958612b6f75979c02308d9a71092bad4d4f38a2a3c772f45f8f/hostname",
	        "HostsPath": "/var/lib/docker/containers/afe05cf6c38af958612b6f75979c02308d9a71092bad4d4f38a2a3c772f45f8f/hosts",
	        "LogPath": "/var/lib/docker/containers/afe05cf6c38af958612b6f75979c02308d9a71092bad4d4f38a2a3c772f45f8f/afe05cf6c38af958612b6f75979c02308d9a71092bad4d4f38a2a3c772f45f8f-json.log",
	        "Name": "/functional-113090",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-113090:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-113090",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1cf108bf8d98a654711468528a31f699c4c4125bd18dcec6cf25cf5fa1fbcef7-init/diff:/var/lib/docker/overlay2/ab6cf1b3d2a8cc4179735a54668a5a4ec060988eb25398d5edaaa8c4eb9fdd94/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1cf108bf8d98a654711468528a31f699c4c4125bd18dcec6cf25cf5fa1fbcef7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1cf108bf8d98a654711468528a31f699c4c4125bd18dcec6cf25cf5fa1fbcef7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1cf108bf8d98a654711468528a31f699c4c4125bd18dcec6cf25cf5fa1fbcef7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-113090",
	                "Source": "/var/lib/docker/volumes/functional-113090/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-113090",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-113090",
	                "name.minikube.sigs.k8s.io": "functional-113090",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "121a2a686c5567eca900bdf0af8bc79b14d8ebb893db29d0c9b08c821d187804",
	            "SandboxKey": "/var/run/docker/netns/121a2a686c55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-113090": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "bd18b9cba032992e5de839c270a398f62eaed29bb61c91587a73be2eb9a61565",
	                    "EndpointID": "2fa4cc70714449e5ddd2671d96df057767181d9c1d7aa1683f3bae4f0364696b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-113090",
	                        "afe05cf6c38a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
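For reference, the "Ports" map under NetworkSettings in the inspect output above pairs each exposed container port with its 127.0.0.1 host binding (for example, 8441/tcp is published on host port 32781). Below is a minimal sketch of extracting those bindings programmatically; it assumes only that a container named functional-113090 is running on the local Docker daemon.

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// portBinding mirrors one entry under NetworkSettings.Ports in the
	// `docker inspect` output shown above.
	type portBinding struct {
		HostIP   string `json:"HostIp"`
		HostPort string `json:"HostPort"`
	}

	// inspectEntry keeps only the field this sketch needs.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]portBinding `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		// docker inspect prints a JSON array with one entry per container.
		out, err := exec.Command("docker", "inspect", "functional-113090").Output()
		if err != nil {
			log.Fatal(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			log.Fatal(err)
		}
		for _, e := range entries {
			for port, bindings := range e.NetworkSettings.Ports {
				for _, b := range bindings {
					fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
				}
			}
		}
	}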
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-113090 -n functional-113090
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-113090 logs -n 25: (1.425892766s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-113090 ssh stat                                               | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|                | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| image          | functional-113090                                                        | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|                | image ls --format short                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-113090 ssh sudo                                               | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| update-context | functional-113090                                                        | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:55 UTC | 09 Dec 24 23:55 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| ssh            | functional-113090 ssh pgrep                                              | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC |                     |
	|                | buildkitd                                                                |                   |         |         |                     |                     |
	| ssh            | functional-113090 ssh findmnt                                            | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-113090                                                     | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port3203544953/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| image          | functional-113090 image build -t                                         | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC | 09 Dec 24 23:56 UTC |
	|                | localhost/my-image:functional-113090                                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                         |                   |         |         |                     |                     |
	| ssh            | functional-113090 ssh findmnt                                            | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC | 09 Dec 24 23:56 UTC |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-113090 ssh -- ls                                              | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC | 09 Dec 24 23:56 UTC |
	|                | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh            | functional-113090 ssh sudo                                               | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC |                     |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-113090                                                     | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup704133726/001:/mount3    |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-113090                                                     | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup704133726/001:/mount1    |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-113090                                                     | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup704133726/001:/mount2    |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-113090 ssh findmnt                                            | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC |                     |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| image          | functional-113090 image ls                                               | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC | 09 Dec 24 23:56 UTC |
	| image          | functional-113090                                                        | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC | 09 Dec 24 23:56 UTC |
	|                | image ls --format yaml                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-113090                                                        | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC | 09 Dec 24 23:56 UTC |
	|                | image ls --format json                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-113090                                                        | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC | 09 Dec 24 23:56 UTC |
	|                | image ls --format table                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-113090 ssh findmnt                                            | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC | 09 Dec 24 23:56 UTC |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| update-context | functional-113090                                                        | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC | 09 Dec 24 23:56 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| ssh            | functional-113090 ssh findmnt                                            | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC | 09 Dec 24 23:56 UTC |
	|                | -T /mount2                                                               |                   |         |         |                     |                     |
	| update-context | functional-113090                                                        | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC | 09 Dec 24 23:56 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| ssh            | functional-113090 ssh findmnt                                            | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC | 09 Dec 24 23:56 UTC |
	|                | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-113090                                                     | functional-113090 | jenkins | v1.34.0 | 09 Dec 24 23:56 UTC |                     |
	|                | --kill=true                                                              |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:55:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:55:46.595104   53647 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:55:46.595240   53647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:46.595251   53647 out.go:358] Setting ErrFile to fd 2...
	I1209 23:55:46.595258   53647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:46.595615   53647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-8617/.minikube/bin
	I1209 23:55:46.596253   53647 out.go:352] Setting JSON to false
	I1209 23:55:46.597230   53647 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2294,"bootTime":1733786253,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:55:46.597340   53647 start.go:139] virtualization: kvm guest
	I1209 23:55:46.599257   53647 out.go:177] * [functional-113090] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:55:46.600767   53647 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 23:55:46.600798   53647 notify.go:220] Checking for updates...
	I1209 23:55:46.603551   53647 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:55:46.605284   53647 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-8617/kubeconfig
	I1209 23:55:46.611814   53647 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-8617/.minikube
	I1209 23:55:46.613454   53647 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:55:46.614901   53647 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:55:46.617073   53647 config.go:182] Loaded profile config "functional-113090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:55:46.617579   53647 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:55:46.645254   53647 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:55:46.645364   53647 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:55:46.698483   53647 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-09 23:55:46.688895802 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:55:46.698592   53647 docker.go:318] overlay module found
	I1209 23:55:46.702241   53647 out.go:177] * Using the docker driver based on existing profile
	I1209 23:55:46.703923   53647 start.go:297] selected driver: docker
	I1209 23:55:46.703943   53647 start.go:901] validating driver "docker" against &{Name:functional-113090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-113090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:55:46.704090   53647 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:55:46.706449   53647 out.go:201] 
	W1209 23:55:46.708099   53647 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 23:55:46.709703   53647 out.go:201] 
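The start attempt above fails driver validation because the requested 250 MiB is below the usable minimum quoted in the error (1800 MB). A toy version of that check, with the threshold copied from the log message rather than from minikube's source, might look like:

	package main

	import "fmt"

	// minUsableMB is the threshold quoted in the RSRC_INSUFFICIENT_REQ_MEMORY
	// message above; it is an assumption here, not read from minikube's code.
	const minUsableMB = 1800

	// validateMemory mimics the spirit of minikube's requested-memory check.
	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMB is less than the usable minimum of %dMB", requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250)) // fails, matching the log above
	}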
	
	
	==> CRI-O <==
	Dec 10 00:03:21 functional-113090 crio[5484]: time="2024-12-10 00:03:21.896210373Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=e5760bc8-4b13-48ef-89b7-3feb5331d1b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:03:21 functional-113090 crio[5484]: time="2024-12-10 00:03:21.896486625Z" level=info msg="Image docker.io/mysql:5.7 not found" id=e5760bc8-4b13-48ef-89b7-3feb5331d1b7 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:03:35 functional-113090 crio[5484]: time="2024-12-10 00:03:35.896393570Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=acb4ab1f-7c54-434e-b546-1db3f19438cc name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:03:35 functional-113090 crio[5484]: time="2024-12-10 00:03:35.896600566Z" level=info msg="Image docker.io/mysql:5.7 not found" id=acb4ab1f-7c54-434e-b546-1db3f19438cc name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:03:48 functional-113090 crio[5484]: time="2024-12-10 00:03:48.896503319Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=333c0967-a70f-4f13-ba06-1d2cfe0dca02 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:03:48 functional-113090 crio[5484]: time="2024-12-10 00:03:48.896769615Z" level=info msg="Image docker.io/mysql:5.7 not found" id=333c0967-a70f-4f13-ba06-1d2cfe0dca02 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:04:00 functional-113090 crio[5484]: time="2024-12-10 00:04:00.896485691Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=46c51e46-c86b-42f7-b7a7-01628c99f8c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:04:00 functional-113090 crio[5484]: time="2024-12-10 00:04:00.896737163Z" level=info msg="Image docker.io/mysql:5.7 not found" id=46c51e46-c86b-42f7-b7a7-01628c99f8c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:04:13 functional-113090 crio[5484]: time="2024-12-10 00:04:13.896223503Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=84760a20-56a6-4611-9047-e406c50ea118 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:04:13 functional-113090 crio[5484]: time="2024-12-10 00:04:13.896453999Z" level=info msg="Image docker.io/mysql:5.7 not found" id=84760a20-56a6-4611-9047-e406c50ea118 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:04:13 functional-113090 crio[5484]: time="2024-12-10 00:04:13.897025090Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=8064bd98-ca8c-47b7-9c0b-6156a16c97ba name=/runtime.v1.ImageService/PullImage
	Dec 10 00:04:13 functional-113090 crio[5484]: time="2024-12-10 00:04:13.912195303Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Dec 10 00:04:14 functional-113090 crio[5484]: time="2024-12-10 00:04:14.378404730Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Dec 10 00:04:58 functional-113090 crio[5484]: time="2024-12-10 00:04:58.896669430Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=de28624f-5d23-44e7-b430-43ed787a279a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:04:58 functional-113090 crio[5484]: time="2024-12-10 00:04:58.896922713Z" level=info msg="Image docker.io/mysql:5.7 not found" id=de28624f-5d23-44e7-b430-43ed787a279a name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:05:12 functional-113090 crio[5484]: time="2024-12-10 00:05:12.896731697Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=28726112-6c7a-4268-af2f-3c654a0a827e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:05:12 functional-113090 crio[5484]: time="2024-12-10 00:05:12.897001353Z" level=info msg="Image docker.io/mysql:5.7 not found" id=28726112-6c7a-4268-af2f-3c654a0a827e name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:05:23 functional-113090 crio[5484]: time="2024-12-10 00:05:23.895844841Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=9e27e7ad-2c27-4a51-9048-ba46432bcf51 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:05:23 functional-113090 crio[5484]: time="2024-12-10 00:05:23.896125343Z" level=info msg="Image docker.io/mysql:5.7 not found" id=9e27e7ad-2c27-4a51-9048-ba46432bcf51 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:05:34 functional-113090 crio[5484]: time="2024-12-10 00:05:34.896675387Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=ecf227d9-f53c-492d-a099-4dd3a7fd257d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:05:34 functional-113090 crio[5484]: time="2024-12-10 00:05:34.896945066Z" level=info msg="Image docker.io/mysql:5.7 not found" id=ecf227d9-f53c-492d-a099-4dd3a7fd257d name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:05:46 functional-113090 crio[5484]: time="2024-12-10 00:05:46.896595800Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=0e6c692e-4317-4e37-beef-b424d8dff2e6 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:05:46 functional-113090 crio[5484]: time="2024-12-10 00:05:46.896864787Z" level=info msg="Image docker.io/mysql:5.7 not found" id=0e6c692e-4317-4e37-beef-b424d8dff2e6 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:05:57 functional-113090 crio[5484]: time="2024-12-10 00:05:57.896949870Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=e8e72ec5-959d-4710-88e8-e36413c15556 name=/runtime.v1.ImageService/ImageStatus
	Dec 10 00:05:57 functional-113090 crio[5484]: time="2024-12-10 00:05:57.897188329Z" level=info msg="Image docker.io/mysql:5.7 not found" id=e8e72ec5-959d-4710-88e8-e36413c15556 name=/runtime.v1.ImageService/ImageStatus
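The CRI-O entries above show the kubelet repeatedly asking for docker.io/mysql:5.7 while the pull started at 00:04:13 is still in flight, which lines up with the 600s TestFunctional/parallel/MySQL timeout. As a rough illustration only (this helper is hypothetical, not part of the test suite), a caller could poll the runtime via crictl until the image lands:

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForImage polls `crictl images -q <ref>` until the runtime reports
	// the image or the deadline expires. It assumes crictl is on PATH and is
	// configured to talk to the CRI-O socket.
	func waitForImage(ref string, timeout, interval time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("crictl", "images", "-q", ref).Output()
			if err == nil && len(bytes.TrimSpace(out)) > 0 {
				return nil // an image ID was printed, so the image is present
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("image %s not present after %s", ref, timeout)
	}

	func main() {
		if err := waitForImage("docker.io/mysql:5.7", 10*time.Minute, 15*time.Second); err != nil {
			fmt.Println(err)
		}
	}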
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	cdbbe5893155c       docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42                  10 minutes ago      Running             myfrontend                  0                   dec018b0fd28c       sp-pod
	3c679164adeec       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              10 minutes ago      Exited              mount-munger                0                   ee1af76d56bc0       busybox-mount
	b717d218efd4a       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   84ee48c8ea190       dashboard-metrics-scraper-c5db448b4-tqv5x
	4df9cb9d81020       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         10 minutes ago      Running             kubernetes-dashboard        0                   a093ce930ca04       kubernetes-dashboard-695b96c756-jk6md
	9da8cbbe3a8f7       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                  10 minutes ago      Running             nginx                       0                   edfe0f658c729       nginx-svc
	46cc24460615e       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   9d388767425dc       hello-node-connect-67bdd5bbb4-78b7j
	8ef499055e31b       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   c9185db339240       hello-node-6b9f76b5c7-2xchw
	18e9961e3dc11       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 10 minutes ago      Running             coredns                     3                   561ba6e34b33d       coredns-7c65d6cfc9-gvrxv
	729152f9155d3       50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e                                                 10 minutes ago      Running             kindnet-cni                 3                   dee2d8bb26bf4       kindnet-4h69s
	eccc540693103       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         4                   a63371c064324       storage-provisioner
	6857dfd57a5b5       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                 10 minutes ago      Running             kube-proxy                  3                   5a95faa2f21f5       kube-proxy-mhdsf
	224c9d630866c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                 10 minutes ago      Running             kube-apiserver              0                   499e57015a556       kube-apiserver-functional-113090
	d3f50b6c67aab       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                 10 minutes ago      Running             kube-scheduler              3                   532961a55050d       kube-scheduler-functional-113090
	8153cb42f1180       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 10 minutes ago      Running             etcd                        3                   73b711ca6d5d3       etcd-functional-113090
	e4e157e4f63be       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                 10 minutes ago      Running             kube-controller-manager     3                   ce9e210ed5cec       kube-controller-manager-functional-113090
	031b5d00932ab       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         3                   a63371c064324       storage-provisioner
	f946922fdd89a       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                 11 minutes ago      Exited              kube-proxy                  2                   5a95faa2f21f5       kube-proxy-mhdsf
	3007c2f2c8947       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 11 minutes ago      Exited              coredns                     2                   561ba6e34b33d       coredns-7c65d6cfc9-gvrxv
	d362f81d8b728       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                 11 minutes ago      Exited              kube-scheduler              2                   532961a55050d       kube-scheduler-functional-113090
	df8e2e02d65fa       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 11 minutes ago      Exited              etcd                        2                   73b711ca6d5d3       etcd-functional-113090
	0ad307862ee37       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                 11 minutes ago      Exited              kube-controller-manager     2                   ce9e210ed5cec       kube-controller-manager-functional-113090
	f517fec5533ac       50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e                                                 11 minutes ago      Exited              kindnet-cni                 2                   dee2d8bb26bf4       kindnet-4h69s
	
	
	==> coredns [18e9961e3dc11b7bf6359a50ae2950b5fc7d0da64fdd2a61d9d9d60765840397] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53225 - 21422 "HINFO IN 2830511807742814528.4312167940381333676. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028992624s
	
	
	==> coredns [3007c2f2c8947c932bab9d219cc9cbb24a5e577f93750e71f63b11b183e1fd76] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48734 - 49269 "HINFO IN 263119974023674787.2945277760157077023. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.032694964s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-113090
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-113090
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=functional-113090
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T23_53_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 23:53:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-113090
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:05:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:01:21 +0000   Mon, 09 Dec 2024 23:53:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:01:21 +0000   Mon, 09 Dec 2024 23:53:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:01:21 +0000   Mon, 09 Dec 2024 23:53:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:01:21 +0000   Mon, 09 Dec 2024 23:54:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-113090
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	System Info:
	  Machine ID:                 9aa3d2474df844498d2eb2a85807fec3
	  System UUID:                5c4735c4-9e97-43ca-a4c1-b261d8ba322e
	  Boot ID:                    fcda772d-4207-4ab9-84d8-f9ba5cb81f2f
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-2xchw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-67bdd5bbb4-78b7j          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-6cdb49bbb-qdsk6                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-gvrxv                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-113090                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-4h69s                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-113090             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-113090    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-mhdsf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-113090             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-tqv5x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-jk6md        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-113090 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-113090 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-113090 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-113090 event: Registered Node functional-113090 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-113090 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-113090 event: Registered Node functional-113090 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-113090 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-113090 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-113090 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-113090 event: Registered Node functional-113090 in Controller
	
	
	==> dmesg <==
	[  +0.000782] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000744] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000758] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.005178] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001365] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.645483] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025447] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.034285] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.032948] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +7.141697] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 9 23:47] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[  +1.015721] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[  +2.011802] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[  +4.127509] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[  +8.191113] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[ +16.130221] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[Dec 9 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 85 95 83 05 c3 8e 68 1a 6b fe a8 08 00
	[Dec 9 23:56] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [8153cb42f1180cfe3752af18c0d51895565affc3cf038146ed13d0a7ce61d427] <==
	{"level":"info","ts":"2024-12-09T23:55:11.525666Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-09T23:55:11.525717Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-09T23:55:11.525901Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-09T23:55:11.525935Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-09T23:55:11.583778Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T23:55:11.583849Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T23:55:12.906645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-12-09T23:55:12.906709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-12-09T23:55:12.906750Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-09T23:55:12.906764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-12-09T23:55:12.906770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-12-09T23:55:12.906780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-12-09T23:55:12.906796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-12-09T23:55:12.908328Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-113090 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-09T23:55:12.908337Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T23:55:12.908380Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T23:55:12.908648Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-09T23:55:12.908684Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-09T23:55:12.909253Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T23:55:12.909348Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T23:55:12.910014Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-12-09T23:55:12.910172Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-10T00:05:12.928820Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1118}
	{"level":"info","ts":"2024-12-10T00:05:12.949352Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1118,"took":"20.194487ms","hash":2129297598,"current-db-size-bytes":4108288,"current-db-size":"4.1 MB","current-db-size-in-use-bytes":1601536,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-10T00:05:12.949401Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2129297598,"revision":1118,"compact-revision":-1}
	
	
	==> etcd [df8e2e02d65fa35112153c42120cf77b6552226f8873826724340175d340f57c] <==
	{"level":"info","ts":"2024-12-09T23:54:31.298274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-09T23:54:31.298293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-12-09T23:54:31.298308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-12-09T23:54:31.298315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-09T23:54:31.298374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-12-09T23:54:31.298390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-09T23:54:31.299669Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-113090 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-09T23:54:31.299703Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T23:54:31.299729Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T23:54:31.299879Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-09T23:54:31.299904Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-09T23:54:31.300642Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T23:54:31.300923Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T23:54:31.301542Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-12-09T23:54:31.301694Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-09T23:54:56.878204Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-12-09T23:54:56.878280Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-113090","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-12-09T23:54:56.878375Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-09T23:54:56.878496Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-09T23:54:56.899563Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-09T23:54:56.899647Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-09T23:54:56.899743Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-12-09T23:54:56.901826Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-09T23:54:56.901923Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-09T23:54:56.901957Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-113090","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 00:06:00 up 48 min,  0 users,  load average: 0.19, 0.27, 0.33
	Linux functional-113090 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [729152f9155d3d781517f021dd76870c767191bce065dad42e7a24c6e2bc3627] <==
	I1210 00:03:55.820989       1 main.go:301] handling current node
	I1210 00:04:05.819594       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 00:04:05.819629       1 main.go:301] handling current node
	I1210 00:04:15.812715       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 00:04:15.812749       1 main.go:301] handling current node
	I1210 00:04:25.819612       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 00:04:25.819654       1 main.go:301] handling current node
	I1210 00:04:35.819639       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 00:04:35.819672       1 main.go:301] handling current node
	I1210 00:04:45.812625       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 00:04:45.812668       1 main.go:301] handling current node
	I1210 00:04:55.819591       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 00:04:55.819631       1 main.go:301] handling current node
	I1210 00:05:05.815639       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 00:05:05.815674       1 main.go:301] handling current node
	I1210 00:05:15.812726       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 00:05:15.812774       1 main.go:301] handling current node
	I1210 00:05:25.822104       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 00:05:25.822149       1 main.go:301] handling current node
	I1210 00:05:35.818118       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 00:05:35.818154       1 main.go:301] handling current node
	I1210 00:05:45.814821       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 00:05:45.814863       1 main.go:301] handling current node
	I1210 00:05:55.820848       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1210 00:05:55.820896       1 main.go:301] handling current node
	
	
	==> kindnet [f517fec5533ac594725a870de3726b864cf3e38dbd8caf8b97971a955d936639] <==
	I1209 23:54:29.884107       1 controller.go:365] Waiting for informer caches to sync
	I1209 23:54:29.884117       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	W1209 23:54:29.983848       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W1209 23:54:29.983874       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W1209 23:54:29.983800       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E1209 23:54:29.983992       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError"
	E1209 23:54:29.984003       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError"
	E1209 23:54:29.984016       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError"
	W1209 23:54:29.984667       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E1209 23:54:29.984762       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError"
	W1209 23:54:32.397579       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: nodes is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "kindnet" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1209 23:54:32.397696       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError"
	W1209 23:54:32.397786       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "kindnet" not found]
	E1209 23:54:32.397865       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"kindnet\" not found]" logger="UnhandledError"
	W1209 23:54:32.397893       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "kindnet" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1209 23:54:32.398025       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError"
	W1209 23:54:32.397962       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "kindnet" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1209 23:54:32.398065       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"kindnet\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError"
	I1209 23:54:35.384662       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1209 23:54:35.384691       1 metrics.go:61] Registering metrics
	I1209 23:54:35.384734       1 controller.go:401] Syncing nftables rules
	I1209 23:54:39.884705       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:54:39.884776       1 main.go:301] handling current node
	I1209 23:54:49.884657       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1209 23:54:49.884689       1 main.go:301] handling current node
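
The RBAC "clusterrole ... not found" warnings above are transient: they are logged while the restarted apiserver is still rebuilding its authorizer caches, and kindnet recovers three seconds later once "Caches are synced" appears. To verify the grants after the control plane settles, a sketch using the ClusterRole name from the log:

	kubectl --context functional-113090 get clusterrole kindnet -o name
	kubectl --context functional-113090 auth can-i list nodes \
	  --as=system:serviceaccount:kube-system:kindnet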
	
	
	==> kube-apiserver [224c9d630866c1e526915824b1b6ffe1df8f2bc1c20afb71c898d05564a91ffd] <==
	E1209 23:55:13.994110       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1209 23:55:14.033792       1 shared_informer.go:320] Caches are synced for configmaps
	I1209 23:55:14.047399       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1209 23:55:14.053607       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1209 23:55:14.053636       1 policy_source.go:224] refreshing policies
	I1209 23:55:14.110090       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 23:55:14.835999       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1209 23:55:15.739089       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1209 23:55:15.830529       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1209 23:55:15.839806       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1209 23:55:15.892784       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 23:55:15.899708       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 23:55:31.723893       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.104.239"}
	I1209 23:55:31.732957       1 controller.go:615] quota admission added evaluator for: endpoints
	I1209 23:55:31.733316       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 23:55:36.137691       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1209 23:55:36.277086       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.49.185"}
	I1209 23:55:38.006155       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.110.7.188"}
	I1209 23:55:38.511085       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.193.102"}
	I1209 23:55:48.307871       1 controller.go:615] quota admission added evaluator for: namespaces
	I1209 23:55:48.549222       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.37.191"}
	I1209 23:55:48.596178       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.3.217"}
	E1209 23:55:54.773536       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56888: use of closed network connection
	I1209 23:55:58.693417       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.105.117.211"}
	E1209 23:56:03.682948       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56940: use of closed network connection
	
	
	==> kube-controller-manager [0ad307862ee371bf84feb1f06c4aa74cfd611da2238cfa566649acda1d313f15] <==
	I1209 23:54:35.868388       1 shared_informer.go:320] Caches are synced for disruption
	I1209 23:54:35.873086       1 shared_informer.go:320] Caches are synced for resource quota
	I1209 23:54:35.876517       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="212.617722ms"
	I1209 23:54:35.876747       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="83.926µs"
	I1209 23:54:35.878554       1 shared_informer.go:320] Caches are synced for TTL
	I1209 23:54:35.886994       1 shared_informer.go:320] Caches are synced for node
	I1209 23:54:35.887048       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1209 23:54:35.887077       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1209 23:54:35.887087       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1209 23:54:35.887091       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1209 23:54:35.887186       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-113090"
	I1209 23:54:35.894142       1 shared_informer.go:320] Caches are synced for taint
	I1209 23:54:35.894217       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1209 23:54:35.894308       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-113090"
	I1209 23:54:35.894355       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1209 23:54:35.917616       1 shared_informer.go:320] Caches are synced for daemon sets
	I1209 23:54:35.917625       1 shared_informer.go:320] Caches are synced for persistent volume
	I1209 23:54:35.918330       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1209 23:54:36.281716       1 shared_informer.go:320] Caches are synced for garbage collector
	I1209 23:54:36.317605       1 shared_informer.go:320] Caches are synced for garbage collector
	I1209 23:54:36.317643       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1209 23:54:38.426630       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="78.418µs"
	I1209 23:54:38.443444       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.853647ms"
	I1209 23:54:38.443597       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="108.747µs"
	I1209 23:54:39.707834       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-113090"
	
	
	==> kube-controller-manager [e4e157e4f63be2fb6de047b92cd8c72c0edf645a2ba3362da5bef8fa7df12ec3] <==
	I1209 23:55:48.506078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="69.792µs"
	I1209 23:55:48.506460       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="69.676µs"
	I1209 23:55:48.528064       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="61.904µs"
	I1209 23:55:54.151157       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.794546ms"
	I1209 23:55:54.151268       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="65.833µs"
	I1209 23:55:56.160821       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.088744ms"
	I1209 23:55:56.160912       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="46.629µs"
	I1209 23:55:58.797767       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="62.886372ms"
	I1209 23:55:58.809348       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="11.456399ms"
	I1209 23:55:58.809444       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="54.485µs"
	I1209 23:55:58.812887       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="59.929µs"
	I1209 23:56:15.362639       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-113090"
	I1209 23:56:30.233284       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="78.034µs"
	I1209 23:56:43.906079       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="81.693µs"
	I1209 23:57:28.905576       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="70.798µs"
	I1209 23:57:43.906445       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="85.791µs"
	I1209 23:58:27.905479       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="102.786µs"
	I1209 23:58:42.904396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="96.453µs"
	I1209 23:59:38.906950       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="75.64µs"
	I1209 23:59:49.905050       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="146.646µs"
	I1210 00:01:21.889725       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-113090"
	I1210 00:01:40.906871       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="109.416µs"
	I1210 00:01:54.904739       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="137.769µs"
	I1210 00:04:58.907827       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="105.237µs"
	I1210 00:05:12.905979       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="114.424µs"
	
	
	==> kube-proxy [6857dfd57a5b5e86b9d83e8bc600687c45b90876f0289719452b3ad6f0c6ca35] <==
	I1209 23:55:15.314712       1 server_linux.go:66] "Using iptables proxy"
	I1209 23:55:15.439175       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1209 23:55:15.439330       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 23:55:15.461338       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 23:55:15.461409       1 server_linux.go:169] "Using iptables Proxier"
	I1209 23:55:15.463170       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 23:55:15.463467       1 server.go:483] "Version info" version="v1.31.2"
	I1209 23:55:15.463497       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:55:15.464754       1 config.go:105] "Starting endpoint slice config controller"
	I1209 23:55:15.464823       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 23:55:15.464852       1 config.go:199] "Starting service config controller"
	I1209 23:55:15.464856       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 23:55:15.464770       1 config.go:328] "Starting node config controller"
	I1209 23:55:15.464868       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 23:55:15.565542       1 shared_informer.go:320] Caches are synced for node config
	I1209 23:55:15.565583       1 shared_informer.go:320] Caches are synced for service config
	I1209 23:55:15.565588       1 shared_informer.go:320] Caches are synced for endpoint slice config
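
The nodePortAddresses warning is advisory: left unset, NodePort services accept connections on every local IP. The remedy the log itself suggests can be applied through the kubeadm-managed kube-proxy ConfigMap (a sketch; "primary" is accepted by the kube-proxy version shown, v1.31.2, and the config.conf data key is kubeadm's default):

	kubectl --context functional-113090 -n kube-system edit configmap kube-proxy
	# under the config.conf key, set:
	#   nodePortAddresses: ["primary"]
	kubectl --context functional-113090 -n kube-system rollout restart daemonset kube-proxy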
	
	
	==> kube-proxy [f946922fdd89a3e9fa5fcb1c6563dd7338d714d163ba85a7366ec131f5c542a7] <==
	I1209 23:54:40.283060       1 server_linux.go:66] "Using iptables proxy"
	I1209 23:54:40.411718       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1209 23:54:40.411782       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 23:54:40.435982       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1209 23:54:40.436045       1 server_linux.go:169] "Using iptables Proxier"
	I1209 23:54:40.438084       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 23:54:40.438487       1 server.go:483] "Version info" version="v1.31.2"
	I1209 23:54:40.438513       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:54:40.439671       1 config.go:199] "Starting service config controller"
	I1209 23:54:40.439718       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 23:54:40.439720       1 config.go:328] "Starting node config controller"
	I1209 23:54:40.439749       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 23:54:40.439684       1 config.go:105] "Starting endpoint slice config controller"
	I1209 23:54:40.439791       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 23:54:40.540069       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 23:54:40.540112       1 shared_informer.go:320] Caches are synced for service config
	I1209 23:54:40.540210       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d362f81d8b728570a053f7805ace062c199f6001f7cdecd5adf87162827e8a45] <==
	I1209 23:54:30.858880       1 serving.go:386] Generated self-signed cert in-memory
	W1209 23:54:32.313467       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 23:54:32.313504       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1209 23:54:32.313517       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 23:54:32.313527       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 23:54:32.402460       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1209 23:54:32.402587       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:54:32.405475       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 23:54:32.405521       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 23:54:32.406263       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1209 23:54:32.484599       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 23:54:32.506000       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1209 23:54:56.877896       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d3f50b6c67aab77d9f18cd0c31b43c3f64765d39a7cc15bc67d065303830002e] <==
	I1209 23:55:12.289879       1 serving.go:386] Generated self-signed cert in-memory
	W1209 23:55:13.896747       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 23:55:13.896782       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 23:55:13.896794       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 23:55:13.896804       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 23:55:13.996739       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1209 23:55:13.996771       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:55:13.998497       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 23:55:13.998531       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 23:55:13.998673       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1209 23:55:13.998724       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 23:55:14.099362       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 00:04:44 functional-113090 kubelet[5846]: E1210 00:04:44.985340    5846 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Dec 10 00:04:44 functional-113090 kubelet[5846]: E1210 00:04:44.985407    5846 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Dec 10 00:04:44 functional-113090 kubelet[5846]: E1210 00:04:44.985515    5846 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mt5pg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-6cdb49bbb-qdsk6_default(bbdea9c7-f8a5-498f-a3a8-e10ee2a31a61): ErrImagePull: initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 10 00:04:44 functional-113090 kubelet[5846]: E1210 00:04:44.986760    5846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"initializing source docker://mysql:5.7: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-6cdb49bbb-qdsk6" podUID="bbdea9c7-f8a5-498f-a3a8-e10ee2a31a61"
	Dec 10 00:04:51 functional-113090 kubelet[5846]: E1210 00:04:51.006694    5846 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789091006475915,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:305873,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:04:51 functional-113090 kubelet[5846]: E1210 00:04:51.006735    5846 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789091006475915,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:305873,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:04:58 functional-113090 kubelet[5846]: E1210 00:04:58.897167    5846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-qdsk6" podUID="bbdea9c7-f8a5-498f-a3a8-e10ee2a31a61"
	Dec 10 00:05:01 functional-113090 kubelet[5846]: E1210 00:05:01.010337    5846 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789101009938987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:305873,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:05:01 functional-113090 kubelet[5846]: E1210 00:05:01.010385    5846 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789101009938987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:305873,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:05:10 functional-113090 kubelet[5846]: E1210 00:05:10.910235    5846 container_manager_linux.go:513] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/afe05cf6c38af958612b6f75979c02308d9a71092bad4d4f38a2a3c772f45f8f, memory: /docker/afe05cf6c38af958612b6f75979c02308d9a71092bad4d4f38a2a3c772f45f8f/system.slice/kubelet.service"
	Dec 10 00:05:11 functional-113090 kubelet[5846]: E1210 00:05:11.012783    5846 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789111012504678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:305873,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:05:11 functional-113090 kubelet[5846]: E1210 00:05:11.012823    5846 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789111012504678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:305873,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:05:12 functional-113090 kubelet[5846]: E1210 00:05:12.897215    5846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-qdsk6" podUID="bbdea9c7-f8a5-498f-a3a8-e10ee2a31a61"
	Dec 10 00:05:21 functional-113090 kubelet[5846]: E1210 00:05:21.014341    5846 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789121014104422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:305873,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:05:21 functional-113090 kubelet[5846]: E1210 00:05:21.014379    5846 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789121014104422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:305873,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:05:23 functional-113090 kubelet[5846]: E1210 00:05:23.896417    5846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-qdsk6" podUID="bbdea9c7-f8a5-498f-a3a8-e10ee2a31a61"
	Dec 10 00:05:31 functional-113090 kubelet[5846]: E1210 00:05:31.016768    5846 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789131016531846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:305873,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:05:31 functional-113090 kubelet[5846]: E1210 00:05:31.016809    5846 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789131016531846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:305873,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:05:34 functional-113090 kubelet[5846]: E1210 00:05:34.897171    5846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-qdsk6" podUID="bbdea9c7-f8a5-498f-a3a8-e10ee2a31a61"
	Dec 10 00:05:41 functional-113090 kubelet[5846]: E1210 00:05:41.018737    5846 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789141018525820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:305873,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:05:41 functional-113090 kubelet[5846]: E1210 00:05:41.018785    5846 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789141018525820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:305873,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:05:46 functional-113090 kubelet[5846]: E1210 00:05:46.897102    5846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-qdsk6" podUID="bbdea9c7-f8a5-498f-a3a8-e10ee2a31a61"
	Dec 10 00:05:51 functional-113090 kubelet[5846]: E1210 00:05:51.020312    5846 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789151020084610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:305873,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:05:51 functional-113090 kubelet[5846]: E1210 00:05:51.020367    5846 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789151020084610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:305873,},InodesUsed:&UInt64Value{Value:139,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:05:57 functional-113090 kubelet[5846]: E1210 00:05:57.897460    5846 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-qdsk6" podUID="bbdea9c7-f8a5-498f-a3a8-e10ee2a31a61"
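
Every mysql pull attempt above dies on Docker Hub's toomanyrequests response, and the kubelet is simply backing off between retries; nothing cluster-side is misconfigured. The standard workaround is to pull as an authenticated user via an imagePullSecret (a sketch; the secret name regcred and the credential placeholders are illustrative):

	kubectl --context functional-113090 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<DOCKERHUB_USER> --docker-password=<DOCKERHUB_TOKEN>
	kubectl --context functional-113090 patch deployment mysql --type=merge \
	  -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"regcred"}]}}}}'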
	
	
	==> kubernetes-dashboard [4df9cb9d81020b2a2b34c0580e9feee8499de1e68031cba7aec90431c2aac978] <==
	2024/12/09 23:55:53 Using namespace: kubernetes-dashboard
	2024/12/09 23:55:53 Using in-cluster config to connect to apiserver
	2024/12/09 23:55:53 Using secret token for csrf signing
	2024/12/09 23:55:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/12/09 23:55:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/12/09 23:55:53 Successful initial request to the apiserver, version: v1.31.2
	2024/12/09 23:55:53 Generating JWE encryption key
	2024/12/09 23:55:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/12/09 23:55:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/12/09 23:55:53 Initializing JWE encryption key from synchronized object
	2024/12/09 23:55:53 Creating in-cluster Sidecar client
	2024/12/09 23:55:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/09 23:55:53 Serving insecurely on HTTP port: 9090
	2024/12/09 23:56:23 Successful request to sidecar
	2024/12/09 23:55:53 Starting overwatch
	
	
	==> storage-provisioner [031b5d00932ab6d16f35db3e85ce5f57abfe956cb65d5e4618dc9253e548236a] <==
	I1209 23:54:55.256289       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 23:54:55.263130       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 23:54:55.263345       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [eccc5406931039c41852298438bf88190dd26b7ed9465ea0ab47fde4bfafb24f] <==
	I1209 23:55:15.223650       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 23:55:15.291792       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 23:55:15.291999       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 23:55:32.691459       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 23:55:32.691639       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-113090_7e982a37-533a-455b-9a21-ec60a60e208b!
	I1209 23:55:32.691592       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"20c085e5-27b0-49e7-93e6-857e5def89a2", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-113090_7e982a37-533a-455b-9a21-ec60a60e208b became leader
	I1209 23:55:32.791970       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-113090_7e982a37-533a-455b-9a21-ec60a60e208b!
	I1209 23:55:42.498143       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1209 23:55:42.498364       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f28565e2-f199-4a25-b596-577cdf78be44", APIVersion:"v1", ResourceVersion:"727", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1209 23:55:42.498199       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    cfd3cf90-0175-4a43-af85-665de160d7ed 349 0 2024-12-09 23:53:54 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-12-09 23:53:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-f28565e2-f199-4a25-b596-577cdf78be44 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  f28565e2-f199-4a25-b596-577cdf78be44 727 0 2024-12-09 23:55:42 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-12-09 23:55:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-12-09 23:55:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1209 23:55:42.498593       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-f28565e2-f199-4a25-b596-577cdf78be44" provisioned
	I1209 23:55:42.498622       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1209 23:55:42.498629       1 volume_store.go:212] Trying to save persistentvolume "pvc-f28565e2-f199-4a25-b596-577cdf78be44"
	I1209 23:55:42.508364       1 volume_store.go:219] persistentvolume "pvc-f28565e2-f199-4a25-b596-577cdf78be44" saved
	I1209 23:55:42.508517       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f28565e2-f199-4a25-b596-577cdf78be44", APIVersion:"v1", ResourceVersion:"727", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f28565e2-f199-4a25-b596-577cdf78be44
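
The provisioner log shows the full dynamic-provisioning round trip: the myclaim PVC arrives, a hostpath volume is carved out under /tmp/hostpath-provisioner/default/myclaim, and pvc-f28565e2-f199-4a25-b596-577cdf78be44 is saved and announced within ten milliseconds. Reconstructed from the object dump above, the claim that triggered it is equivalent to the following (storageClassName omitted, so the default "standard" class applies):

	kubectl --context functional-113090 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	EOF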
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-113090 -n functional-113090
helpers_test.go:261: (dbg) Run:  kubectl --context functional-113090 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-6cdb49bbb-qdsk6
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-113090 describe pod busybox-mount mysql-6cdb49bbb-qdsk6
helpers_test.go:282: (dbg) kubectl --context functional-113090 describe pod busybox-mount mysql-6cdb49bbb-qdsk6:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-113090/192.168.49.2
	Start Time:       Mon, 09 Dec 2024 23:55:52 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://3c679164adeecbd19311a7920eb606ea6bc08725db44855b920b56fc2a55f51f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 09 Dec 2024 23:55:56 +0000
	      Finished:     Mon, 09 Dec 2024 23:55:56 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rlnmg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-rlnmg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-113090
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.042s (3.035s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-6cdb49bbb-qdsk6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-113090/192.168.49.2
	Start Time:       Mon, 09 Dec 2024 23:55:58 +0000
	Labels:           app=mysql
	                  pod-template-hash=6cdb49bbb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/mysql-6cdb49bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mt5pg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mt5pg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-6cdb49bbb-qdsk6 to functional-113090
	  Normal   Pulling    7m7s (x4 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     6m36s (x4 over 9m32s)  kubelet            Error: ErrImagePull
	  Warning  Failed     6m36s (x2 over 8m47s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m12s (x6 over 9m31s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    6m (x7 over 9m31s)     kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     4m35s (x3 over 9m32s)  kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.80s)
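Note on the failure above: every pull of docker.io/mysql:5.7 was rejected with Docker Hub's "toomanyrequests" rate limit, so the container never started. A minimal remediation sketch, not part of this run (the secret name dockerhub-creds and the DOCKER_USER/DOCKER_PAT credentials are placeholders), is to authenticate pulls through an imagePullSecret on the default service account, which the mysql pod uses per the describe output:

    # Create registry credentials in the default namespace (placeholder values).
    kubectl --context functional-113090 create secret docker-registry dockerhub-creds \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username="$DOCKER_USER" \
      --docker-password="$DOCKER_PAT"
    # Let the default service account use them for all image pulls.
    kubectl --context functional-113090 patch serviceaccount default \
      -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'

Deleting the stuck pod afterwards lets the ReplicaSet recreate it and retry the pull with credentials.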


Test pass (301/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.01
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 5.41
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.06
18 TestDownloadOnly/v1.31.2/DeleteAll 0.21
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.07
21 TestBinaryMirror 0.75
22 TestOffline 59.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 152.72
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 7.46
35 TestAddons/parallel/Registry 14.18
37 TestAddons/parallel/InspektorGadget 11.69
40 TestAddons/parallel/CSI 46.04
41 TestAddons/parallel/Headlamp 17.44
42 TestAddons/parallel/CloudSpanner 6.48
43 TestAddons/parallel/LocalPath 56.84
44 TestAddons/parallel/NvidiaDevicePlugin 6.46
45 TestAddons/parallel/Yakd 10.86
46 TestAddons/parallel/AmdGpuDevicePlugin 5.61
47 TestAddons/StoppedEnableDisable 12.03
48 TestCertOptions 29.46
49 TestCertExpiration 224.01
51 TestForceSystemdFlag 28.18
52 TestForceSystemdEnv 34.48
54 TestKVMDriverInstallOrUpdate 3.69
58 TestErrorSpam/setup 21.16
59 TestErrorSpam/start 0.58
60 TestErrorSpam/status 0.85
61 TestErrorSpam/pause 1.48
62 TestErrorSpam/unpause 1.72
63 TestErrorSpam/stop 1.36
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 42.82
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 38.65
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.08
75 TestFunctional/serial/CacheCmd/cache/add_local 1.28
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 33.17
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.39
86 TestFunctional/serial/LogsFileCmd 1.4
87 TestFunctional/serial/InvalidService 4.52
89 TestFunctional/parallel/ConfigCmd 0.41
90 TestFunctional/parallel/DashboardCmd 10.75
91 TestFunctional/parallel/DryRun 0.43
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 1.25
97 TestFunctional/parallel/ServiceCmdConnect 7.76
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 26.37
101 TestFunctional/parallel/SSHCmd 0.63
102 TestFunctional/parallel/CpCmd 2.14
104 TestFunctional/parallel/FileSync 0.27
105 TestFunctional/parallel/CertSync 1.6
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.49
113 TestFunctional/parallel/License 0.22
114 TestFunctional/parallel/ServiceCmd/DeployApp 9.22
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.31
120 TestFunctional/parallel/ServiceCmd/List 0.61
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
123 TestFunctional/parallel/ServiceCmd/Format 0.38
124 TestFunctional/parallel/ServiceCmd/URL 0.47
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
126 TestFunctional/parallel/ProfileCmd/profile_list 0.56
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
128 TestFunctional/parallel/Version/short 0.06
129 TestFunctional/parallel/Version/components 0.5
130 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
131 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
132 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
133 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
134 TestFunctional/parallel/ImageCommands/ImageBuild 2.18
135 TestFunctional/parallel/ImageCommands/Setup 0.91
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
142 TestFunctional/parallel/MountCmd/any-port 9.79
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.08
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.32
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
153 TestFunctional/parallel/MountCmd/specific-port 1.73
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.69
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 106.45
162 TestMultiControlPlane/serial/DeployApp 3.9
163 TestMultiControlPlane/serial/PingHostFromPods 1.02
164 TestMultiControlPlane/serial/AddWorkerNode 33.01
165 TestMultiControlPlane/serial/NodeLabels 0.06
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
167 TestMultiControlPlane/serial/CopyFile 15.99
168 TestMultiControlPlane/serial/StopSecondaryNode 12.5
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.68
170 TestMultiControlPlane/serial/RestartSecondaryNode 25.11
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.02
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 166.67
173 TestMultiControlPlane/serial/DeleteSecondaryNode 11.34
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
175 TestMultiControlPlane/serial/StopCluster 35.6
176 TestMultiControlPlane/serial/RestartCluster 119.02
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
178 TestMultiControlPlane/serial/AddSecondaryNode 41.55
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
183 TestJSONOutput/start/Command 39.91
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.66
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.6
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.77
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
208 TestKicCustomNetwork/create_custom_network 29.59
209 TestKicCustomNetwork/use_default_bridge_network 23.2
210 TestKicExistingNetwork 25.69
211 TestKicCustomSubnet 26.94
212 TestKicStaticIP 26.69
213 TestMainNoArgs 0.05
214 TestMinikubeProfile 49.36
217 TestMountStart/serial/StartWithMountFirst 5.39
218 TestMountStart/serial/VerifyMountFirst 0.25
219 TestMountStart/serial/StartWithMountSecond 5.39
220 TestMountStart/serial/VerifyMountSecond 0.24
221 TestMountStart/serial/DeleteFirst 1.59
222 TestMountStart/serial/VerifyMountPostDelete 0.25
223 TestMountStart/serial/Stop 1.18
224 TestMountStart/serial/RestartStopped 7.22
225 TestMountStart/serial/VerifyMountPostStop 0.25
228 TestMultiNode/serial/FreshStart2Nodes 70.63
229 TestMultiNode/serial/DeployApp2Nodes 4.93
230 TestMultiNode/serial/PingHostFrom2Pods 0.71
231 TestMultiNode/serial/AddNode 27.54
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.61
234 TestMultiNode/serial/CopyFile 8.94
235 TestMultiNode/serial/StopNode 2.1
236 TestMultiNode/serial/StartAfterStop 8.94
237 TestMultiNode/serial/RestartKeepsNodes 100.53
238 TestMultiNode/serial/DeleteNode 5.21
239 TestMultiNode/serial/StopMultiNode 23.7
240 TestMultiNode/serial/RestartMultiNode 47.23
241 TestMultiNode/serial/ValidateNameConflict 25.98
246 TestPreload 105.99
248 TestScheduledStopUnix 98.8
251 TestInsufficientStorage 9.87
252 TestRunningBinaryUpgrade 59.84
254 TestKubernetesUpgrade 354.12
255 TestMissingContainerUpgrade 132.12
257 TestStoppedBinaryUpgrade/Setup 0.43
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
259 TestNoKubernetes/serial/StartWithK8s 32.64
260 TestStoppedBinaryUpgrade/Upgrade 95.49
261 TestNoKubernetes/serial/StartWithStopK8s 14
262 TestNoKubernetes/serial/Start 5.01
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
264 TestNoKubernetes/serial/ProfileList 7.22
265 TestNoKubernetes/serial/Stop 1.26
266 TestNoKubernetes/serial/StartNoArgs 7.29
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
276 TestPause/serial/Start 48.2
277 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
278 TestPause/serial/SecondStartNoReconfiguration 40.23
286 TestNetworkPlugins/group/false 3.85
290 TestPause/serial/Pause 0.78
291 TestPause/serial/VerifyStatus 0.41
292 TestPause/serial/Unpause 0.86
293 TestPause/serial/PauseAgain 0.84
294 TestPause/serial/DeletePaused 5.11
295 TestPause/serial/VerifyDeletedResources 21.17
297 TestStartStop/group/old-k8s-version/serial/FirstStart 135.41
299 TestStartStop/group/no-preload/serial/FirstStart 54.74
300 TestStartStop/group/no-preload/serial/DeployApp 8.37
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.92
302 TestStartStop/group/no-preload/serial/Stop 11.83
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
304 TestStartStop/group/no-preload/serial/SecondStart 262.93
305 TestStartStop/group/old-k8s-version/serial/DeployApp 8.39
306 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.83
307 TestStartStop/group/old-k8s-version/serial/Stop 11.87
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
309 TestStartStop/group/old-k8s-version/serial/SecondStart 132
311 TestStartStop/group/embed-certs/serial/FirstStart 45.09
312 TestStartStop/group/embed-certs/serial/DeployApp 7.31
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.94
314 TestStartStop/group/embed-certs/serial/Stop 11.94
315 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
316 TestStartStop/group/embed-certs/serial/SecondStart 262.21
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 43.9
319 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
321 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
322 TestStartStop/group/old-k8s-version/serial/Pause 2.56
324 TestStartStop/group/newest-cni/serial/FirstStart 29.17
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.25
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.88
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 276.69
330 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.94
332 TestStartStop/group/newest-cni/serial/Stop 1.2
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
334 TestStartStop/group/newest-cni/serial/SecondStart 13.66
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
338 TestStartStop/group/newest-cni/serial/Pause 3.24
339 TestNetworkPlugins/group/auto/Start 42.36
340 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
341 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
342 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
343 TestStartStop/group/no-preload/serial/Pause 2.82
344 TestNetworkPlugins/group/kindnet/Start 40.65
345 TestNetworkPlugins/group/auto/KubeletFlags 0.27
346 TestNetworkPlugins/group/auto/NetCatPod 9.21
347 TestNetworkPlugins/group/auto/DNS 0.12
348 TestNetworkPlugins/group/auto/Localhost 0.11
349 TestNetworkPlugins/group/auto/HairPin 0.1
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
351 TestNetworkPlugins/group/kindnet/KubeletFlags 0.44
352 TestNetworkPlugins/group/kindnet/NetCatPod 10.21
353 TestNetworkPlugins/group/calico/Start 59.16
354 TestNetworkPlugins/group/kindnet/DNS 0.13
355 TestNetworkPlugins/group/kindnet/Localhost 0.11
356 TestNetworkPlugins/group/kindnet/HairPin 0.12
357 TestNetworkPlugins/group/custom-flannel/Start 48.19
358 TestNetworkPlugins/group/calico/ControllerPod 6.01
359 TestNetworkPlugins/group/calico/KubeletFlags 0.26
360 TestNetworkPlugins/group/calico/NetCatPod 9.17
361 TestNetworkPlugins/group/calico/DNS 0.12
362 TestNetworkPlugins/group/calico/Localhost 0.1
363 TestNetworkPlugins/group/calico/HairPin 0.11
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.18
366 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
367 TestNetworkPlugins/group/custom-flannel/DNS 0.14
368 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
369 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
370 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
371 TestNetworkPlugins/group/enable-default-cni/Start 74.05
372 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
373 TestStartStop/group/embed-certs/serial/Pause 3.34
374 TestNetworkPlugins/group/flannel/Start 51.14
375 TestNetworkPlugins/group/bridge/Start 34.25
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
377 TestNetworkPlugins/group/bridge/NetCatPod 10.2
378 TestNetworkPlugins/group/flannel/ControllerPod 6.01
379 TestNetworkPlugins/group/bridge/DNS 0.12
380 TestNetworkPlugins/group/bridge/Localhost 0.1
381 TestNetworkPlugins/group/bridge/HairPin 0.11
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
383 TestNetworkPlugins/group/flannel/NetCatPod 10.21
384 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
385 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.19
386 TestNetworkPlugins/group/flannel/DNS 0.18
387 TestNetworkPlugins/group/flannel/Localhost 0.13
388 TestNetworkPlugins/group/flannel/HairPin 0.13
389 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
390 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
391 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
392 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
393 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
394 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
395 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.73

TestDownloadOnly/v1.20.0/json-events (7.01s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-060591 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-060591 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.011857174s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.01s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1209 23:43:33.905379   15396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1209 23:43:33.905478   15396 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-8617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-060591
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-060591: exit status 85 (63.92658ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-060591 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |          |
	|         | -p download-only-060591        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:43:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:43:26.935591   15407 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:43:26.935726   15407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:26.935736   15407 out.go:358] Setting ErrFile to fd 2...
	I1209 23:43:26.935741   15407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:26.935956   15407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-8617/.minikube/bin
	W1209 23:43:26.936095   15407 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20062-8617/.minikube/config/config.json: open /home/jenkins/minikube-integration/20062-8617/.minikube/config/config.json: no such file or directory
	I1209 23:43:26.936688   15407 out.go:352] Setting JSON to true
	I1209 23:43:26.937598   15407 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1554,"bootTime":1733786253,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:43:26.937656   15407 start.go:139] virtualization: kvm guest
	I1209 23:43:26.940593   15407 out.go:97] [download-only-060591] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1209 23:43:26.940725   15407 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20062-8617/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 23:43:26.940764   15407 notify.go:220] Checking for updates...
	I1209 23:43:26.942223   15407 out.go:169] MINIKUBE_LOCATION=20062
	I1209 23:43:26.943880   15407 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:43:26.945440   15407 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20062-8617/kubeconfig
	I1209 23:43:26.947031   15407 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-8617/.minikube
	I1209 23:43:26.948666   15407 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1209 23:43:26.951302   15407 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 23:43:26.951625   15407 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:43:26.972379   15407 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:43:26.972486   15407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:43:27.345276   15407 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2024-12-09 23:43:27.336752344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:43:27.345391   15407 docker.go:318] overlay module found
	I1209 23:43:27.347340   15407 out.go:97] Using the docker driver based on user configuration
	I1209 23:43:27.347372   15407 start.go:297] selected driver: docker
	I1209 23:43:27.347377   15407 start.go:901] validating driver "docker" against <nil>
	I1209 23:43:27.347484   15407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:43:27.397280   15407 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:48 SystemTime:2024-12-09 23:43:27.388715308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:43:27.397443   15407 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:43:27.397960   15407 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1209 23:43:27.398155   15407 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 23:43:27.400028   15407 out.go:169] Using Docker driver with root privileges
	I1209 23:43:27.401326   15407 cni.go:84] Creating CNI manager for ""
	I1209 23:43:27.401396   15407 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 23:43:27.401407   15407 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 23:43:27.401497   15407 start.go:340] cluster config:
	{Name:download-only-060591 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-060591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:43:27.403057   15407 out.go:97] Starting "download-only-060591" primary control-plane node in "download-only-060591" cluster
	I1209 23:43:27.403073   15407 cache.go:121] Beginning downloading kic base image for docker with crio
	I1209 23:43:27.404308   15407 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1209 23:43:27.404347   15407 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:43:27.404428   15407 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 23:43:27.421092   15407 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 23:43:27.421276   15407 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1209 23:43:27.421375   15407 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 23:43:27.470997   15407 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1209 23:43:27.471049   15407 cache.go:56] Caching tarball of preloaded images
	I1209 23:43:27.471223   15407 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:43:27.473187   15407 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1209 23:43:27.473207   15407 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1209 23:43:27.506240   15407 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20062-8617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1209 23:43:30.701876   15407 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	
	
	* The control-plane node download-only-060591 host does not exist
	  To start a cluster, run: "minikube start -p download-only-060591"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
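Note: the "Last Start" log above pins the preload download to an md5 digest (download.go:107, ?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19), so the cached tarball can be spot-checked by hand. A minimal sketch, assuming the cache path printed in that log:

    # Digest should match the md5 value embedded in the download URL above.
    md5sum /home/jenkins/minikube-integration/20062-8617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
    # expected: f93b07cde9c3289306cbaeb7a1803c19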

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-060591
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.2/json-events (5.41s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-694743 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-694743 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.407633896s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (5.41s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1209 23:43:39.711722   15396 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1209 23:43:39.711767   15396 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-8617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-694743
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-694743: exit status 85 (63.987824ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-060591 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | -p download-only-060591        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| delete  | -p download-only-060591        | download-only-060591 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| start   | -o=json --download-only        | download-only-694743 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | -p download-only-694743        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:43:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:43:34.344514   15758 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:43:34.344627   15758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:34.344640   15758 out.go:358] Setting ErrFile to fd 2...
	I1209 23:43:34.344644   15758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:34.344812   15758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-8617/.minikube/bin
	I1209 23:43:34.345350   15758 out.go:352] Setting JSON to true
	I1209 23:43:34.346221   15758 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1561,"bootTime":1733786253,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:43:34.346311   15758 start.go:139] virtualization: kvm guest
	I1209 23:43:34.348662   15758 out.go:97] [download-only-694743] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:43:34.348809   15758 notify.go:220] Checking for updates...
	I1209 23:43:34.350594   15758 out.go:169] MINIKUBE_LOCATION=20062
	I1209 23:43:34.352307   15758 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:43:34.353770   15758 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20062-8617/kubeconfig
	I1209 23:43:34.355247   15758 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-8617/.minikube
	I1209 23:43:34.356689   15758 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1209 23:43:34.359652   15758 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 23:43:34.359885   15758 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:43:34.382174   15758 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:43:34.382281   15758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:43:34.429030   15758 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:46 SystemTime:2024-12-09 23:43:34.420336325 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:43:34.429166   15758 docker.go:318] overlay module found
	I1209 23:43:34.431078   15758 out.go:97] Using the docker driver based on user configuration
	I1209 23:43:34.431103   15758 start.go:297] selected driver: docker
	I1209 23:43:34.431114   15758 start.go:901] validating driver "docker" against <nil>
	I1209 23:43:34.431199   15758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:43:34.477512   15758 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:46 SystemTime:2024-12-09 23:43:34.469195388 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:43:34.477685   15758 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:43:34.478191   15758 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1209 23:43:34.478325   15758 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 23:43:34.480330   15758 out.go:169] Using Docker driver with root privileges
	I1209 23:43:34.481531   15758 cni.go:84] Creating CNI manager for ""
	I1209 23:43:34.481588   15758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1209 23:43:34.481600   15758 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 23:43:34.481654   15758 start.go:340] cluster config:
	{Name:download-only-694743 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-694743 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:43:34.483191   15758 out.go:97] Starting "download-only-694743" primary control-plane node in "download-only-694743" cluster
	I1209 23:43:34.483211   15758 cache.go:121] Beginning downloading kic base image for docker with crio
	I1209 23:43:34.484769   15758 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1209 23:43:34.484802   15758 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:43:34.484906   15758 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1209 23:43:34.501110   15758 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1209 23:43:34.501220   15758 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1209 23:43:34.501234   15758 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1209 23:43:34.501239   15758 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1209 23:43:34.501245   15758 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1209 23:43:34.513337   15758 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 23:43:34.513368   15758 cache.go:56] Caching tarball of preloaded images
	I1209 23:43:34.513561   15758 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:43:34.515624   15758 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1209 23:43:34.515645   15758 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1209 23:43:34.541667   15758 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/20062-8617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 23:43:38.290345   15758 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1209 23:43:38.290435   15758 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20062-8617/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1209 23:43:39.037843   15758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 23:43:39.038216   15758 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/download-only-694743/config.json ...
	I1209 23:43:39.038246   15758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/download-only-694743/config.json: {Name:mk66f06de847edefbc45f3e420e03f61c2cddfee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:43:39.038407   15758 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:43:39.038542   15758 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20062-8617/.minikube/cache/linux/amd64/v1.31.2/kubectl
	
	
	* The control-plane node download-only-694743 host does not exist
	  To start a cluster, run: "minikube start -p download-only-694743"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

TestDownloadOnly/v1.31.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.21s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-694743
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.07s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-926270 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-926270" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-926270
--- PASS: TestDownloadOnlyKic (1.07s)

TestBinaryMirror (0.75s)

=== RUN   TestBinaryMirror
I1209 23:43:41.450226   15396 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-759052 --alsologtostderr --binary-mirror http://127.0.0.1:37197 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-759052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-759052
--- PASS: TestBinaryMirror (0.75s)

TestOffline (59.6s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-522272 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-522272 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (57.276621628s)
helpers_test.go:175: Cleaning up "offline-crio-522272" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-522272
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-522272: (2.326081403s)
--- PASS: TestOffline (59.60s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-701527
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-701527: exit status 85 (51.835689ms)
-- stdout --
	* Profile "addons-701527" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-701527"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-701527
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-701527: exit status 85 (55.032347ms)
-- stdout --
	* Profile "addons-701527" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-701527"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (152.72s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-701527 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-701527 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m32.723353753s)
--- PASS: TestAddons/Setup (152.72s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-701527 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-701527 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (7.46s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-701527 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-701527 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e889ff68-f847-492a-a0d6-f3c9a14fe017] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e889ff68-f847-492a-a0d6-f3c9a14fe017] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.004006177s
addons_test.go:633: (dbg) Run:  kubectl --context addons-701527 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-701527 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-701527 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.46s)

TestAddons/parallel/Registry (14.18s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.735368ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-hqlfw" [e0b25e01-7672-4537-ae66-04da6fa6f483] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002939349s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-g2wbp" [1e1d2641-f760-4ea1-9dd2-8579da7521e1] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003687573s
addons_test.go:331: (dbg) Run:  kubectl --context addons-701527 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-701527 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-701527 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.456701097s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 ip
2024/12/09 23:46:44 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.18s)

TestAddons/parallel/InspektorGadget (11.69s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6c97l" [f5b066f7-cf6a-49d5-ade7-d3af0e7917c3] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003747268s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-701527 addons disable inspektor-gadget --alsologtostderr -v=1: (5.688867547s)
--- PASS: TestAddons/parallel/InspektorGadget (11.69s)

TestAddons/parallel/CSI (46.04s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1209 23:46:42.759023   15396 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1209 23:46:42.764402   15396 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1209 23:46:42.764431   15396 kapi.go:107] duration metric: took 5.415685ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.427223ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-701527 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-701527 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [09a4415f-4778-40f9-aa16-d5d95d825098] Pending
helpers_test.go:344: "task-pv-pod" [09a4415f-4778-40f9-aa16-d5d95d825098] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [09a4415f-4778-40f9-aa16-d5d95d825098] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003399915s
addons_test.go:511: (dbg) Run:  kubectl --context addons-701527 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-701527 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-701527 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-701527 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-701527 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-701527 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-701527 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e1ef76ab-fe48-45b9-9fad-5a0f5a2bf984] Pending
helpers_test.go:344: "task-pv-pod-restore" [e1ef76ab-fe48-45b9-9fad-5a0f5a2bf984] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e1ef76ab-fe48-45b9-9fad-5a0f5a2bf984] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.012495604s
addons_test.go:553: (dbg) Run:  kubectl --context addons-701527 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-701527 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-701527 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-701527 addons disable volumesnapshots --alsologtostderr -v=1: (1.093992334s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-701527 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.611546542s)
--- PASS: TestAddons/parallel/CSI (46.04s)
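Note: the repeated helpers_test.go:394 lines above are a poll loop: the helper re-reads the PVC's .status.phase until it reports the expected value (Bound). Below is a minimal Go sketch of the same loop, assuming kubectl is on PATH; waitForPVCPhase and the 2-second interval are illustrative, not the test's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls `kubectl get pvc` the same way the helper above
// does, returning once the claim reaches the wanted phase or the
// timeout expires.
func waitForPVCPhase(kubeContext, name, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}",
			"-n", "default").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %q did not reach phase %q within %v", name, want, timeout)
}

func main() {
	fmt.Println(waitForPVCPhase("addons-701527", "hpvc", "Bound", 6*time.Minute))
}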

TestAddons/parallel/Headlamp (17.44s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-701527 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-qffkn" [840e463b-19ca-48ed-9a9e-30b1298166a3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-qffkn" [840e463b-19ca-48ed-9a9e-30b1298166a3] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.051267558s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-701527 addons disable headlamp --alsologtostderr -v=1: (5.635463646s)
--- PASS: TestAddons/parallel/Headlamp (17.44s)

TestAddons/parallel/CloudSpanner (6.48s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-wtcnn" [122036a5-2851-4771-88e8-1e12f45cfb27] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.002949043s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.48s)

TestAddons/parallel/LocalPath (56.84s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-701527 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-701527 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-701527 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [586fe6d0-1990-4a53-bdd8-4e1e5d90833a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [586fe6d0-1990-4a53-bdd8-4e1e5d90833a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [586fe6d0-1990-4a53-bdd8-4e1e5d90833a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.003072994s
addons_test.go:906: (dbg) Run:  kubectl --context addons-701527 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 ssh "cat /opt/local-path-provisioner/pvc-d348a07d-27a8-404f-adfc-4e8b72e76d0a_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-701527 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-701527 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-701527 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.025156432s)
--- PASS: TestAddons/parallel/LocalPath (56.84s)

TestAddons/parallel/NvidiaDevicePlugin (6.46s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-55d28" [8ebd5f2c-593c-4804-9e9f-91b53ea7fa82] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003044159s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.46s)

TestAddons/parallel/Yakd (10.86s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-4l79j" [13941444-5425-4ba4-aff1-b9502cf5f1c9] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003582603s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-701527 addons disable yakd --alsologtostderr -v=1: (5.859798775s)
--- PASS: TestAddons/parallel/Yakd (10.86s)

TestAddons/parallel/AmdGpuDevicePlugin (5.61s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-d2s7j" [d66910fc-8153-4362-b58d-0c34ded7766f] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.004116971s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.61s)

TestAddons/StoppedEnableDisable (12.03s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-701527
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-701527: (11.781475443s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-701527
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-701527
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-701527
--- PASS: TestAddons/StoppedEnableDisable (12.03s)

TestCertOptions (29.46s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-966896 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-966896 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.080196072s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-966896 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-966896 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-966896 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-966896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-966896
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-966896: (3.813807252s)
--- PASS: TestCertOptions (29.46s)

TestCertExpiration (224.01s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-716059 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-716059 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (26.503375785s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-716059 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-716059 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (15.084943366s)
helpers_test.go:175: Cleaning up "cert-expiration-716059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-716059
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-716059: (2.417988784s)
--- PASS: TestCertExpiration (224.01s)
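Note: the second start above renews the cluster certificates with --cert-expiration=8760h, i.e. exactly one year (365 days x 24 h), after the first start deliberately issued 3-minute certificates. A quick sanity check of that arithmetic in Go:

package main

import (
	"fmt"
	"time"
)

func main() {
	d, err := time.ParseDuration("8760h") // value passed to --cert-expiration
	if err != nil {
		panic(err)
	}
	fmt.Println(d.Hours() / 24) // 365 (days), i.e. one year
}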

TestForceSystemdFlag (28.18s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-591894 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-591894 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.486913325s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-591894 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-591894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-591894
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-591894: (2.425874556s)
--- PASS: TestForceSystemdFlag (28.18s)

TestForceSystemdEnv (34.48s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-363639 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1210 00:31:15.551373   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-363639 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (30.098417883s)
helpers_test.go:175: Cleaning up "force-systemd-env-363639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-363639
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-363639: (4.380482385s)
--- PASS: TestForceSystemdEnv (34.48s)

TestKVMDriverInstallOrUpdate (3.69s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I1210 00:31:34.197304   15396 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1210 00:31:34.197444   15396 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1210 00:31:34.238060   15396 install.go:62] docker-machine-driver-kvm2: exit status 1
W1210 00:31:34.238555   15396 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1210 00:31:34.238634   15396 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1241513908/001/docker-machine-driver-kvm2
I1210 00:31:34.587708   15396 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1241513908/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020] Decompressors:map[bz2:0xc000125290 gz:0xc000125298 tar:0xc000125240 tar.bz2:0xc000125250 tar.gz:0xc000125260 tar.xz:0xc000125270 tar.zst:0xc000125280 tbz2:0xc000125250 tgz:0xc000125260 txz:0xc000125270 tzst:0xc000125280 xz:0xc0001252a0 zip:0xc0001252b0 zst:0xc0001252a8] Getters:map[file:0xc001a27a70 http:0xc0006c8af0 https:0xc0006c8b90] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1210 00:31:34.587776   15396 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1241513908/001/docker-machine-driver-kvm2
I1210 00:31:36.313962   15396 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1210 00:31:36.314112   15396 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1210 00:31:36.347753   15396 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1210 00:31:36.347795   15396 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1210 00:31:36.347869   15396 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1210 00:31:36.347906   15396 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1241513908/002/docker-machine-driver-kvm2
I1210 00:31:36.511375   15396 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1241513908/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020] Decompressors:map[bz2:0xc000125290 gz:0xc000125298 tar:0xc000125240 tar.bz2:0xc000125250 tar.gz:0xc000125260 tar.xz:0xc000125270 tar.zst:0xc000125280 tbz2:0xc000125250 tgz:0xc000125260 txz:0xc000125270 tzst:0xc000125280 xz:0xc0001252a0 zip:0xc0001252b0 zst:0xc0001252a8] Getters:map[file:0xc0020c78c0 http:0xc0005da7d0 https:0xc0005da820] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1210 00:31:36.511452   15396 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1241513908/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.69s)
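Note: the two driver.go:46 entries above record a deliberate fallback: the checksum file for the arch-specific release asset (docker-machine-driver-kvm2-amd64.sha256) returns 404, so the download retries the common, un-suffixed asset. Below is a minimal sketch of that try-then-fall-back flow, using plain net/http rather than the go-getter library visible in the log; fetch is a hypothetical helper.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

const base = "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"

// fetch downloads url to dst, treating any non-200 status as an error.
func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	// Try the arch-specific asset first; on failure (e.g. the 404 above),
	// fall back to the common, un-suffixed release asset.
	if err := fetch(base+"-amd64", "docker-machine-driver-kvm2"); err != nil {
		fmt.Println("arch-specific download failed:", err, "- trying common version")
		if err := fetch(base, "docker-machine-driver-kvm2"); err != nil {
			panic(err)
		}
	}
}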

TestErrorSpam/setup (21.16s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-615478 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-615478 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-615478 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-615478 --driver=docker  --container-runtime=crio: (21.154911291s)
--- PASS: TestErrorSpam/setup (21.16s)

TestErrorSpam/start (0.58s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-615478 --log_dir /tmp/nospam-615478 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-615478 --log_dir /tmp/nospam-615478 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-615478 --log_dir /tmp/nospam-615478 start --dry-run
--- PASS: TestErrorSpam/start (0.58s)

TestErrorSpam/status (0.85s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-615478 --log_dir /tmp/nospam-615478 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-615478 --log_dir /tmp/nospam-615478 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-615478 --log_dir /tmp/nospam-615478 status
--- PASS: TestErrorSpam/status (0.85s)

TestErrorSpam/pause (1.48s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-615478 --log_dir /tmp/nospam-615478 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-615478 --log_dir /tmp/nospam-615478 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-615478 --log_dir /tmp/nospam-615478 pause
--- PASS: TestErrorSpam/pause (1.48s)

TestErrorSpam/unpause (1.72s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-615478 --log_dir /tmp/nospam-615478 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-615478 --log_dir /tmp/nospam-615478 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-615478 --log_dir /tmp/nospam-615478 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

TestErrorSpam/stop (1.36s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-615478 --log_dir /tmp/nospam-615478 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-615478 --log_dir /tmp/nospam-615478 stop: (1.174028225s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-615478 --log_dir /tmp/nospam-615478 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-615478 --log_dir /tmp/nospam-615478 stop
--- PASS: TestErrorSpam/stop (1.36s)
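Note: each TestErrorSpam subtest above runs the same subcommand three times (error_spam_test.go:159/182) and checks the combined output for unexpected warnings or errors. Below is a minimal sketch of such a scan; the substring patterns are illustrative, and the real test keeps per-command allow-lists rather than this naive check.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runAndScan executes a minikube subcommand and collects any output
// line that looks like log spam.
func runAndScan(args ...string) ([]string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		return nil, err
	}
	var spam []string
	for _, line := range strings.Split(string(out), "\n") {
		l := strings.ToLower(line)
		if strings.Contains(l, "error") || strings.Contains(l, "warning") {
			spam = append(spam, line)
		}
	}
	return spam, nil
}

func main() {
	spam, err := runAndScan("-p", "nospam-615478", "--log_dir", "/tmp/nospam-615478", "status")
	fmt.Println(spam, err)
}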

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20062-8617/.minikube/files/etc/test/nested/copy/15396/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (42.82s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-113090 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-113090 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (42.822020758s)
--- PASS: TestFunctional/serial/StartWithProxy (42.82s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.65s)

=== RUN   TestFunctional/serial/SoftStart
I1209 23:54:10.054048   15396 config.go:182] Loaded profile config "functional-113090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-113090 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-113090 --alsologtostderr -v=8: (38.650125219s)
functional_test.go:663: soft start took 38.650964499s for "functional-113090" cluster.
I1209 23:54:48.704787   15396 config.go:182] Loaded profile config "functional-113090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (38.65s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-113090 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-113090 cache add registry.k8s.io/pause:3.3: (1.101897096s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-113090 cache add registry.k8s.io/pause:latest: (1.002858815s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.08s)

TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-113090 /tmp/TestFunctionalserialCacheCmdcacheadd_local1722870178/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 cache add minikube-local-cache-test:functional-113090
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 cache delete minikube-local-cache-test:functional-113090
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-113090
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113090 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (262.939213ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
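Note: the cache_reload sequence above is a round trip: crictl rmi removes the image inside the node, crictl inspecti then fails with "no such image", and "minikube cache reload" pushes the cached image back so the final inspecti succeeds. Below is a minimal sketch of that round trip via os/exec, reusing the exact commands from the log.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and returns its error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("%s %v: %s\n", name, args, out)
	return err
}

func main() {
	const mk, profile, img = "out/minikube-linux-amd64", "functional-113090", "registry.k8s.io/pause:latest"

	// Remove the image inside the node, as the test does above.
	_ = run(mk, "-p", profile, "ssh", "sudo", "crictl", "rmi", img)

	// inspecti should now fail with "no such image".
	if err := run(mk, "-p", profile, "ssh", "sudo", "crictl", "inspecti", img); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}

	// cache reload re-pushes the cached image into the node...
	_ = run(mk, "-p", profile, "cache", "reload")
	// ...after which inspecti succeeds again.
	_ = run(mk, "-p", profile, "ssh", "sudo", "crictl", "inspecti", img)
}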

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 kubectl -- --context functional-113090 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-113090 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (33.17s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-113090 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-113090 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.172532181s)
functional_test.go:761: restart took 33.172655803s for "functional-113090" cluster.
I1209 23:55:28.680907   15396 config.go:182] Loaded profile config "functional-113090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (33.17s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-113090 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.39s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-113090 logs: (1.387631202s)
--- PASS: TestFunctional/serial/LogsCmd (1.39s)

TestFunctional/serial/LogsFileCmd (1.4s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 logs --file /tmp/TestFunctionalserialLogsFileCmd4175792657/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-113090 logs --file /tmp/TestFunctionalserialLogsFileCmd4175792657/001/logs.txt: (1.402185519s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.40s)

TestFunctional/serial/InvalidService (4.52s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-113090 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-113090
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-113090: exit status 115 (325.856075ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31081 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-113090 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-113090 delete -f testdata/invalidsvc.yaml: (1.008160157s)
--- PASS: TestFunctional/serial/InvalidService (4.52s)
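Exit status 115 is the SVC_UNREACHABLE code shown in the stderr block above: the service object exists and gets a NodePort, but no running pod backs it. A minimal sketch of the same sequence, assuming the profile and testdata from this run:

  kubectl --context functional-113090 apply -f testdata/invalidsvc.yaml
  minikube service invalid-svc -p functional-113090
  echo $?   # 115: service found, but no running pod behind it
  kubectl --context functional-113090 delete -f testdata/invalidsvc.yaml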

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113090 config get cpus: exit status 14 (67.943349ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113090 config get cpus: exit status 14 (70.366842ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
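The two exit status 14 results above are expected: config get fails with that code when the key is unset. A minimal sketch of the round trip the test drives:

  minikube -p functional-113090 config set cpus 2
  minikube -p functional-113090 config get cpus     # prints 2, exit 0
  minikube -p functional-113090 config unset cpus
  minikube -p functional-113090 config get cpus     # exit 14: key not found in config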

TestFunctional/parallel/DashboardCmd (10.75s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-113090 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-113090 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 54577: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.75s)

TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-113090 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-113090 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (188.236582ms)
-- stdout --
	* [functional-113090] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-8617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-8617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1209 23:55:46.152551   53427 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:55:46.153136   53427 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:46.153157   53427 out.go:358] Setting ErrFile to fd 2...
	I1209 23:55:46.153170   53427 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:46.153672   53427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-8617/.minikube/bin
	I1209 23:55:46.154262   53427 out.go:352] Setting JSON to false
	I1209 23:55:46.155282   53427 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2293,"bootTime":1733786253,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:55:46.155388   53427 start.go:139] virtualization: kvm guest
	I1209 23:55:46.157682   53427 out.go:177] * [functional-113090] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:55:46.159110   53427 notify.go:220] Checking for updates...
	I1209 23:55:46.159118   53427 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 23:55:46.160621   53427 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:55:46.162009   53427 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-8617/kubeconfig
	I1209 23:55:46.163265   53427 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-8617/.minikube
	I1209 23:55:46.164563   53427 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:55:46.165858   53427 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:55:46.167767   53427 config.go:182] Loaded profile config "functional-113090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:55:46.168417   53427 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:55:46.193096   53427 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:55:46.193252   53427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:55:46.258358   53427 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-09 23:55:46.243984202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:55:46.258497   53427 docker.go:318] overlay module found
	I1209 23:55:46.261597   53427 out.go:177] * Using the docker driver based on existing profile
	I1209 23:55:46.263284   53427 start.go:297] selected driver: docker
	I1209 23:55:46.263311   53427 start.go:901] validating driver "docker" against &{Name:functional-113090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-113090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:55:46.263397   53427 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:55:46.285071   53427 out.go:201] 
	W1209 23:55:46.286855   53427 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 23:55:46.288661   53427 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-113090 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)
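Exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) comes from pre-flight validation, which runs even under --dry-run, so nothing is created. A sketch of the failing and passing invocations this test drives:

  # Fails validation: 250MB is below the 1800MB usable minimum.
  out/minikube-linux-amd64 start -p functional-113090 --dry-run --memory 250MB \
    --driver=docker --container-runtime=crio
  echo $?   # 23
  # The same dry run without the undersized memory request succeeds.
  out/minikube-linux-amd64 start -p functional-113090 --dry-run \
    --driver=docker --container-runtime=crio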

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-113090 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-113090 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (183.380594ms)
-- stdout --
	* [functional-113090] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-8617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-8617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1209 23:55:46.595104   53647 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:55:46.595240   53647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:46.595251   53647 out.go:358] Setting ErrFile to fd 2...
	I1209 23:55:46.595258   53647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:55:46.595615   53647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-8617/.minikube/bin
	I1209 23:55:46.596253   53647 out.go:352] Setting JSON to false
	I1209 23:55:46.597230   53647 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2294,"bootTime":1733786253,"procs":252,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:55:46.597340   53647 start.go:139] virtualization: kvm guest
	I1209 23:55:46.599257   53647 out.go:177] * [functional-113090] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1209 23:55:46.600767   53647 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 23:55:46.600798   53647 notify.go:220] Checking for updates...
	I1209 23:55:46.603551   53647 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:55:46.605284   53647 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-8617/kubeconfig
	I1209 23:55:46.611814   53647 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-8617/.minikube
	I1209 23:55:46.613454   53647 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:55:46.614901   53647 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:55:46.617073   53647 config.go:182] Loaded profile config "functional-113090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:55:46.617579   53647 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:55:46.645254   53647 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1209 23:55:46.645364   53647 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1209 23:55:46.698483   53647 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-09 23:55:46.688895802 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1209 23:55:46.698592   53647 docker.go:318] overlay module found
	I1209 23:55:46.702241   53647 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1209 23:55:46.703923   53647 start.go:297] selected driver: docker
	I1209 23:55:46.703943   53647 start.go:901] validating driver "docker" against &{Name:functional-113090 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-113090 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:55:46.704090   53647 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:55:46.706449   53647 out.go:201] 
	W1209 23:55:46.708099   53647 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 23:55:46.709703   53647 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
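The French stderr line translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation of 250MiB is less than the usable minimum of 1800MB", i.e. the same failure as TestFunctional/parallel/DryRun, localized. A sketch of forcing the locale for one invocation; using LC_ALL as the selector is an assumption, since the log only shows the localized output:

  # Assumed locale override (the test sets the locale via the environment).
  LC_ALL=fr out/minikube-linux-amd64 start -p functional-113090 --dry-run \
    --memory 250MB --driver=docker --container-runtime=crio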

TestFunctional/parallel/StatusCmd (1.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)

TestFunctional/parallel/ServiceCmdConnect (7.76s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-113090 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-113090 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-78b7j" [ba98f127-197a-4bc6-b737-f11561dd5b52] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-78b7j" [ba98f127-197a-4bc6-b737-f11561dd5b52] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004127311s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30547
functional_test.go:1675: http://192.168.49.2:30547: success! body:
Hostname: hello-node-connect-67bdd5bbb4-78b7j

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30547
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.76s)
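The deploy/expose/resolve flow above can be replayed by hand. A minimal sketch, assuming the profile is still running (the curl step is illustrative):

  kubectl --context functional-113090 create deployment hello-node-connect \
    --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-113090 expose deployment hello-node-connect \
    --type=NodePort --port=8080
  # Once the pod is Running, resolve the NodePort URL and fetch the echo page.
  curl "$(minikube -p functional-113090 service hello-node-connect --url)"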

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (26.37s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [adc31be5-02ad-4c37-a061-ade339782f4d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003651254s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-113090 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-113090 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-113090 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-113090 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c48cc679-78b4-48d8-87bd-42b42ed6ff5a] Pending
helpers_test.go:344: "sp-pod" [c48cc679-78b4-48d8-87bd-42b42ed6ff5a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c48cc679-78b4-48d8-87bd-42b42ed6ff5a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003916081s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-113090 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-113090 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-113090 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d4157dab-7659-4865-a6d1-f9ae60857462] Pending
helpers_test.go:344: "sp-pod" [d4157dab-7659-4865-a6d1-f9ae60857462] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d4157dab-7659-4865-a6d1-f9ae60857462] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004245806s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-113090 exec sp-pod -- ls /tmp/mount
E1209 23:56:15.551609   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:15.558014   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:15.569417   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:15.590819   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:15.632211   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:15.713709   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:15.875396   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:16.197062   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:16.839294   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:18.121212   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:20.683344   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:25.805214   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:36.047128   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:56.528870   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:57:37.491100   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:58:59.412648   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:01:15.551659   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:01:43.254717   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.37s)
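The core of this test is the persistence check: a file written before the pod is deleted must still be visible from a fresh pod mounting the same claim. A minimal sketch of that sequence:

  kubectl --context functional-113090 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-113090 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-113090 apply -f testdata/storage-provisioner/pod.yaml
  # Once the new sp-pod is Running, foo is still there because the PVC survived.
  kubectl --context functional-113090 exec sp-pod -- ls /tmp/mount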

TestFunctional/parallel/SSHCmd (0.63s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

TestFunctional/parallel/CpCmd (2.14s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh -n functional-113090 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 cp functional-113090:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2742221859/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh -n functional-113090 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh -n functional-113090 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.14s)
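minikube cp copies in both directions and creates missing target directories inside the node. A minimal sketch mirroring the three checks above:

  # Host -> node, then read it back over ssh.
  minikube -p functional-113090 cp testdata/cp-test.txt /home/docker/cp-test.txt
  minikube -p functional-113090 ssh -n functional-113090 "sudo cat /home/docker/cp-test.txt"
  # Node -> host.
  minikube -p functional-113090 cp functional-113090:/home/docker/cp-test.txt /tmp/cp-test.txt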

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/15396/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "sudo cat /etc/test/nested/copy/15396/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.6s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/15396.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "sudo cat /etc/ssl/certs/15396.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/15396.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "sudo cat /usr/share/ca-certificates/15396.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/153962.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "sudo cat /etc/ssl/certs/153962.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/153962.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "sudo cat /usr/share/ca-certificates/153962.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.60s)
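The hash-named paths (/etc/ssl/certs/51391683.0 and 3ec20f2e.0) follow OpenSSL's subject-hash symlink convention for the synced PEMs. A sketch of deriving such a name, assuming openssl is available on the host (not part of the test itself):

  # Print the subject hash OpenSSL would use as this cert's /etc/ssl/certs name.
  openssl x509 -in /usr/share/ca-certificates/15396.pem -noout -subject_hash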

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-113090 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113090 ssh "sudo systemctl is-active docker": exit status 1 (244.598395ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113090 ssh "sudo systemctl is-active containerd": exit status 1 (247.850044ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)
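The non-zero exits are the point: systemctl is-active prints "inactive" and exits 3 inside the node (surfaced as "Process exited with status 3" through ssh) for the runtimes crio replaces. A minimal sketch of the same probe; the crio line is illustrative:

  minikube -p functional-113090 ssh "sudo systemctl is-active docker"      # inactive, exit 3
  minikube -p functional-113090 ssh "sudo systemctl is-active containerd"  # inactive, exit 3
  minikube -p functional-113090 ssh "sudo systemctl is-active crio"        # active, exit 0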

TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-113090 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-113090 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-2xchw" [33b743d1-6483-422d-8224-c0ab475832e7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-2xchw" [33b743d1-6483-422d-8224-c0ab475832e7] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.005377881s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.22s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-113090 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-113090 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-113090 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 52063: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-113090 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-113090 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.31s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-113090 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d57bf7ba-e960-4a89-b64d-75f357df2d80] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d57bf7ba-e960-4a89-b64d-75f357df2d80] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.003748997s
I1209 23:55:50.017680   15396 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.31s)
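testsvc.yaml stands up an nginx pod behind a LoadBalancer-type service named nginx-svc; the tunnel started in the previous step is what assigns it a routable external IP. A minimal sketch of the setup (the watch is illustrative):

  minikube -p functional-113090 tunnel &   # keeps LoadBalancer IPs routable
  kubectl --context functional-113090 apply -f testdata/testsvc.yaml
  kubectl --context functional-113090 get svc nginx-svc -w   # wait for EXTERNAL-IP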

TestFunctional/parallel/ServiceCmd/List (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 service list -o json
functional_test.go:1494: Took "559.830581ms" to run "out/minikube-linux-amd64 -p functional-113090 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30769
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30769
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.47s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "501.691295ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "59.130378ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "320.49018ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "49.117599ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.5s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-113090 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-113090
localhost/kicbase/echo-server:functional-113090
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-113090 image ls --format short --alsologtostderr:
I1209 23:55:59.661497   57943 out.go:345] Setting OutFile to fd 1 ...
I1209 23:55:59.661607   57943 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:55:59.661618   57943 out.go:358] Setting ErrFile to fd 2...
I1209 23:55:59.661625   57943 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:55:59.661815   57943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-8617/.minikube/bin
I1209 23:55:59.662429   57943 config.go:182] Loaded profile config "functional-113090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:55:59.662523   57943 config.go:182] Loaded profile config "functional-113090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:55:59.662900   57943 cli_runner.go:164] Run: docker container inspect functional-113090 --format={{.State.Status}}
I1209 23:55:59.682880   57943 ssh_runner.go:195] Run: systemctl --version
I1209 23:55:59.682939   57943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-113090
I1209 23:55:59.703193   57943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/functional-113090/id_rsa Username:docker}
I1209 23:55:59.796858   57943 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
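
The four ImageList subtests (short, table, json, yaml) drive the same listing and vary only --format; under the crio runtime the data ultimately comes from `sudo crictl images --output json` on the node, as the stderr above shows. A sketch:

    for fmt in short table json yaml; do
      out/minikube-linux-amd64 -p functional-113090 image ls --format "$fmt"
    done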

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-113090 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| docker.io/library/nginx                 | alpine             | 91ca84b4f5779 | 54MB   |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| docker.io/library/nginx                 | latest             | 66f8bdd3810c9 | 196MB  |
| localhost/kicbase/echo-server           | functional-113090  | 9056ab77afb8e | 4.94MB |
| localhost/my-image                      | functional-113090  | cda50f7fa9215 | 1.47MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-113090  | da4274f30d513 | 3.33kB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-113090 image ls --format table --alsologtostderr:
I1209 23:56:02.626573   59503 out.go:345] Setting OutFile to fd 1 ...
I1209 23:56:02.626704   59503 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:56:02.626713   59503 out.go:358] Setting ErrFile to fd 2...
I1209 23:56:02.626717   59503 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:56:02.626912   59503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-8617/.minikube/bin
I1209 23:56:02.627635   59503 config.go:182] Loaded profile config "functional-113090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:56:02.627764   59503 config.go:182] Loaded profile config "functional-113090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:56:02.628180   59503 cli_runner.go:164] Run: docker container inspect functional-113090 --format={{.State.Status}}
I1209 23:56:02.646840   59503 ssh_runner.go:195] Run: systemctl --version
I1209 23:56:02.646905   59503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-113090
I1209 23:56:02.668324   59503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/functional-113090/id_rsa Username:docker}
I1209 23:56:02.756143   59503 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-113090 image ls --format json --alsologtostderr:
[{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-113090"],"size":"4943877"},{"id":"182e72185be9b2b0babdef53d2f285
2d6ca90ebae8ff0c121a1014d10ed24f26","repoDigests":["docker.io/library/8bb08aa1ba6e169e5fe2b9f4c0620c3c7aa0ec92731d751d8f36f848dc5da692-tmp@sha256:ef34c54d79e667fde7e4339cbebdbd4f0d6f9bee55f13d5f36c00aa7ade485e9"],"repoTags":[],"size":"1465612"},{"id":"cda50f7fa9215cee51112348a73ce303e5eeafce14f8b88053e2b4717805c539","repoDigests":["localhost/my-image@sha256:05769090c16153a052af4f29fdd0e8eca8bb74eb545f2f8e83ffc4e4250f783a"],"repoTags":["localhost/my-image:functional-113090"],"size":"1468194"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb
8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c12296
5132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":["docker.io/library/nginx@sha256:3d696e8357
051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"195919252"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d
7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66","repoDigests":["docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4","docker.io/librar
y/nginx@sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371"],"repoTags":["docker.io/library/nginx:alpine"],"size":"53958631"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"da4274f30d51387e987bafe0839da9943e70e21655e155c12aedbd0e026a265a","repoDigests":["localhost/minikube-local-cache-test@sha256:6964ea376cde1a83ed742c8ca67dd89f9eb2506edc970b6601974d9463eb902d"],"repoTags":["localhost/minikube-local-cache-test:functional-113090"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@
sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-113090 image ls --format json --alsologtostderr:
I1209 23:56:02.409337   59455 out.go:345] Setting OutFile to fd 1 ...
I1209 23:56:02.409467   59455 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:56:02.409476   59455 out.go:358] Setting ErrFile to fd 2...
I1209 23:56:02.409481   59455 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:56:02.409661   59455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-8617/.minikube/bin
I1209 23:56:02.410324   59455 config.go:182] Loaded profile config "functional-113090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:56:02.410424   59455 config.go:182] Loaded profile config "functional-113090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:56:02.410813   59455 cli_runner.go:164] Run: docker container inspect functional-113090 --format={{.State.Status}}
I1209 23:56:02.429669   59455 ssh_runner.go:195] Run: systemctl --version
I1209 23:56:02.429720   59455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-113090
I1209 23:56:02.448988   59455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/functional-113090/id_rsa Username:docker}
I1209 23:56:02.540165   59455 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-113090 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-113090
size: "4943877"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 182e72185be9b2b0babdef53d2f2852d6ca90ebae8ff0c121a1014d10ed24f26
repoDigests:
- docker.io/library/8bb08aa1ba6e169e5fe2b9f4c0620c3c7aa0ec92731d751d8f36f848dc5da692-tmp@sha256:ef34c54d79e667fde7e4339cbebdbd4f0d6f9bee55f13d5f36c00aa7ade485e9
repoTags: []
size: "1465612"
- id: 91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66
repoDigests:
- docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4
- docker.io/library/nginx@sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371
repoTags:
- docker.io/library/nginx:alpine
size: "53958631"
- id: 66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests:
- docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "195919252"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: da4274f30d51387e987bafe0839da9943e70e21655e155c12aedbd0e026a265a
repoDigests:
- localhost/minikube-local-cache-test@sha256:6964ea376cde1a83ed742c8ca67dd89f9eb2506edc970b6601974d9463eb902d
repoTags:
- localhost/minikube-local-cache-test:functional-113090
size: "3330"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1462480"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cda50f7fa9215cee51112348a73ce303e5eeafce14f8b88053e2b4717805c539
repoDigests:
- localhost/my-image@sha256:05769090c16153a052af4f29fdd0e8eca8bb74eb545f2f8e83ffc4e4250f783a
repoTags:
- localhost/my-image:functional-113090
size: "1468194"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-113090 image ls --format yaml --alsologtostderr:
I1209 23:56:02.195965   59405 out.go:345] Setting OutFile to fd 1 ...
I1209 23:56:02.196097   59405 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:56:02.196106   59405 out.go:358] Setting ErrFile to fd 2...
I1209 23:56:02.196110   59405 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:56:02.196310   59405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-8617/.minikube/bin
I1209 23:56:02.196906   59405 config.go:182] Loaded profile config "functional-113090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:56:02.197000   59405 config.go:182] Loaded profile config "functional-113090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:56:02.197421   59405 cli_runner.go:164] Run: docker container inspect functional-113090 --format={{.State.Status}}
I1209 23:56:02.215263   59405 ssh_runner.go:195] Run: systemctl --version
I1209 23:56:02.215323   59405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-113090
I1209 23:56:02.232939   59405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/functional-113090/id_rsa Username:docker}
I1209 23:56:02.320032   59405 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113090 ssh pgrep buildkitd: exit status 1 (272.211784ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image build -t localhost/my-image:functional-113090 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-113090 image build -t localhost/my-image:functional-113090 testdata/build --alsologtostderr: (1.611537029s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-113090 image build -t localhost/my-image:functional-113090 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 182e72185be
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-113090
--> cda50f7fa92
Successfully tagged localhost/my-image:functional-113090
cda50f7fa9215cee51112348a73ce303e5eeafce14f8b88053e2b4717805c539
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-113090 image build -t localhost/my-image:functional-113090 testdata/build --alsologtostderr:
I1209 23:56:00.293757   58416 out.go:345] Setting OutFile to fd 1 ...
I1209 23:56:00.294099   58416 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:56:00.294110   58416 out.go:358] Setting ErrFile to fd 2...
I1209 23:56:00.294114   58416 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:56:00.294331   58416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-8617/.minikube/bin
I1209 23:56:00.294978   58416 config.go:182] Loaded profile config "functional-113090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:56:00.295621   58416 config.go:182] Loaded profile config "functional-113090": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1209 23:56:00.296066   58416 cli_runner.go:164] Run: docker container inspect functional-113090 --format={{.State.Status}}
I1209 23:56:00.314973   58416 ssh_runner.go:195] Run: systemctl --version
I1209 23:56:00.315028   58416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-113090
I1209 23:56:00.334646   58416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/functional-113090/id_rsa Username:docker}
I1209 23:56:00.423881   58416 build_images.go:161] Building image from path: /tmp/build.1250096731.tar
I1209 23:56:00.423952   58416 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 23:56:00.432305   58416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1250096731.tar
I1209 23:56:00.435733   58416 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1250096731.tar: stat -c "%s %y" /var/lib/minikube/build/build.1250096731.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1250096731.tar': No such file or directory
I1209 23:56:00.435770   58416 ssh_runner.go:362] scp /tmp/build.1250096731.tar --> /var/lib/minikube/build/build.1250096731.tar (3072 bytes)
I1209 23:56:00.458877   58416 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1250096731
I1209 23:56:00.467904   58416 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1250096731 -xf /var/lib/minikube/build/build.1250096731.tar
I1209 23:56:00.476580   58416 crio.go:315] Building image: /var/lib/minikube/build/build.1250096731
I1209 23:56:00.476674   58416 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-113090 /var/lib/minikube/build/build.1250096731 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1209 23:56:01.824435   58416 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-113090 /var/lib/minikube/build/build.1250096731 --cgroup-manager=cgroupfs: (1.347725409s)
I1209 23:56:01.824532   58416 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1250096731
I1209 23:56:01.838032   58416 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1250096731.tar
I1209 23:56:01.846791   58416 build_images.go:217] Built localhost/my-image:functional-113090 from /tmp/build.1250096731.tar
I1209 23:56:01.846829   58416 build_images.go:133] succeeded building to: functional-113090
I1209 23:56:01.846836   58416 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.18s)
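
The build test first confirms no buildkitd is running (the pgrep exits non-zero), then lets minikube tar up the context, copy it into the node, and, on crio, drive `sudo podman build` there, as the stderr shows. A rough reproduction:

    # build testdata/build into the cluster's image store
    out/minikube-linux-amd64 -p functional-113090 image build \
      -t localhost/my-image:functional-113090 testdata/build --alsologtostderr
    # confirm the tag landed
    out/minikube-linux-amd64 -p functional-113090 image ls | grep my-image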

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-113090
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.91s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-113090 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.7.188 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-113090 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
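
The tunnel subtests rely on an nginx-svc LoadBalancer service created earlier in this serial group; with `minikube tunnel` running, the service is assigned an ingress IP that is reachable from the host. A sketch (the service name comes from the test fixtures):

    # keep the tunnel running in the background
    out/minikube-linux-amd64 -p functional-113090 tunnel --alsologtostderr &
    # read the assigned LoadBalancer IP and probe it
    IP=$(kubectl --context functional-113090 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP"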

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-113090 /tmp/TestFunctionalparallelMountCmdany-port3806823162/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733788550249399676" to /tmp/TestFunctionalparallelMountCmdany-port3806823162/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733788550249399676" to /tmp/TestFunctionalparallelMountCmdany-port3806823162/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733788550249399676" to /tmp/TestFunctionalparallelMountCmdany-port3806823162/001/test-1733788550249399676
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113090 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (284.059174ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 23:55:50.533733   15396 retry.go:31] will retry after 445.789107ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 23:55 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 23:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 23:55 test-1733788550249399676
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh cat /mount-9p/test-1733788550249399676
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-113090 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ae86c39c-6f79-4041-9388-fed9bb695718] Pending
helpers_test.go:344: "busybox-mount" [ae86c39c-6f79-4041-9388-fed9bb695718] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ae86c39c-6f79-4041-9388-fed9bb695718] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ae86c39c-6f79-4041-9388-fed9bb695718] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004405948s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-113090 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-113090 /tmp/TestFunctionalparallelMountCmdany-port3806823162/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.79s)
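
any-port starts a 9p mount on an ephemeral port and checks it from both sides: findmnt over ssh proves the guest sees the mount, and a busybox pod then reads and writes files through it. A sketch (/tmp/src is a placeholder for any host directory):

    out/minikube-linux-amd64 mount -p functional-113090 /tmp/src:/mount-9p &
    out/minikube-linux-amd64 -p functional-113090 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-113090 ssh -- ls -la /mount-9p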

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image load --daemon kicbase/echo-server:functional-113090 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-113090 image load --daemon kicbase/echo-server:functional-113090 --alsologtostderr: (2.859275903s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image load --daemon kicbase/echo-server:functional-113090 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-113090
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image load --daemon kicbase/echo-server:functional-113090 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)
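
The three *LoadDaemon tests follow the same pattern: tag an image in the host docker daemon, push it into the cluster's runtime with `image load --daemon`, and list to verify. A sketch:

    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-113090
    out/minikube-linux-amd64 -p functional-113090 image load --daemon kicbase/echo-server:functional-113090
    out/minikube-linux-amd64 -p functional-113090 image ls | grep echo-server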

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image save kicbase/echo-server:functional-113090 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image rm kicbase/echo-server:functional-113090 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
2024/12/09 23:55:57 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-113090
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 image save --daemon kicbase/echo-server:functional-113090 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-113090
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
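
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon form a round-trip: save to a tarball, delete from the cluster, restore from the tarball, then export back into the host docker daemon. A sketch (the tarball path is a placeholder):

    TAR=/tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-113090 image save kicbase/echo-server:functional-113090 "$TAR"
    out/minikube-linux-amd64 -p functional-113090 image rm kicbase/echo-server:functional-113090
    out/minikube-linux-amd64 -p functional-113090 image load "$TAR"
    out/minikube-linux-amd64 -p functional-113090 image save --daemon kicbase/echo-server:functional-113090
    docker image inspect localhost/kicbase/echo-server:functional-113090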

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
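
All three UpdateContextCmd variants run the same command, which rewrites the profile's kubeconfig entry to match the cluster's current apiserver address. A sketch:

    out/minikube-linux-amd64 -p functional-113090 update-context
    kubectl config current-context    # expected to print functional-113090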

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-113090 /tmp/TestFunctionalparallelMountCmdspecific-port3203544953/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113090 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (285.988452ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 23:56:00.322943   15396 retry.go:31] will retry after 468.596311ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-113090 /tmp/TestFunctionalparallelMountCmdspecific-port3203544953/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113090 ssh "sudo umount -f /mount-9p": exit status 1 (261.828252ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-113090 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-113090 /tmp/TestFunctionalparallelMountCmdspecific-port3203544953/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.73s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-113090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup704133726/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-113090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup704133726/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-113090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup704133726/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113090 ssh "findmnt -T" /mount1: exit status 1 (350.459754ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1209 23:56:02.119067   15396 retry.go:31] will retry after 527.968199ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-113090 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-113090 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-113090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup704133726/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-113090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup704133726/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-113090 /tmp/TestFunctionalparallelMountCmdVerifyCleanup704133726/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)
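
VerifyCleanup mounts one host directory at three guest paths and then uses the --kill flag to tear all of the mount processes down at once. A sketch (/tmp/src again a placeholder):

    for m in /mount1 /mount2 /mount3; do
      out/minikube-linux-amd64 mount -p functional-113090 /tmp/src:$m &
    done
    out/minikube-linux-amd64 mount -p functional-113090 --kill=true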

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-113090
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-113090
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-113090
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (106.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-076753 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1210 00:06:15.551856   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-076753 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m45.77187377s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (106.45s)
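
StartCluster brings up a multi-control-plane cluster via the --ha flag and then asserts node and apiserver health with `status`. The exact invocation from the log:

    out/minikube-linux-amd64 start -p ha-076753 --wait=true --memory=2200 --ha \
      -v=7 --alsologtostderr --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p ha-076753 status -v=7 --alsologtostderr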

                                                
                                    
TestMultiControlPlane/serial/DeployApp (3.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-076753 -- rollout status deployment/busybox: (2.017925227s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- exec busybox-7dff88458-25ddb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- exec busybox-7dff88458-c44zf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- exec busybox-7dff88458-zk74n -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- exec busybox-7dff88458-25ddb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- exec busybox-7dff88458-c44zf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- exec busybox-7dff88458-zk74n -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- exec busybox-7dff88458-25ddb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- exec busybox-7dff88458-c44zf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- exec busybox-7dff88458-zk74n -- nslookup kubernetes.default.svc.cluster.local
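Note: the jsonpath queries above pull one field across every pod in a single call. The same technique, sketched as a standalone variation (not part of this run) that pairs each pod name with its IP:

	kubectl --context ha-076753 get pods -o jsonpath='{range .items[*]}{.metadata.name} {.status.podIP}{"\n"}{end}'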
--- PASS: TestMultiControlPlane/serial/DeployApp (3.90s)

TestMultiControlPlane/serial/PingHostFromPods (1.02s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- exec busybox-7dff88458-25ddb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- exec busybox-7dff88458-25ddb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- exec busybox-7dff88458-c44zf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- exec busybox-7dff88458-c44zf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- exec busybox-7dff88458-zk74n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-076753 -- exec busybox-7dff88458-zk74n -- sh -c "ping -c 1 192.168.49.1"
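Note: the pipeline above extracts the host gateway's IP from inside the pod: on busybox's nslookup output here, line 5 carries the resolved address, awk 'NR==5' keeps that line, and cut -d' ' -f3 keeps the address field. A sketch of the same round trip as one script (same pod and context as above; the HOST_IP variable is introduced here for illustration):

	HOST_IP=$(kubectl --context ha-076753 exec busybox-7dff88458-25ddb -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context ha-076753 exec busybox-7dff88458-25ddb -- ping -c 1 "$HOST_IP"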
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.02s)

TestMultiControlPlane/serial/AddWorkerNode (33.01s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-076753 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-076753 -v=7 --alsologtostderr: (32.180977315s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (33.01s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-076753 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
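Note: the HAppy* assertions parse this JSON rather than the human-readable table. A minimal sketch of inspecting it by hand, assuming jq is installed and that the output keeps minikube's valid/invalid profile grouping (an assumed layout, not shown in this log):

	out/minikube-linux-amd64 profile list --output json | jq '.valid[].Name'   # assumes a .valid array of profiles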
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

TestMultiControlPlane/serial/CopyFile (15.99s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp testdata/cp-test.txt ha-076753:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp ha-076753:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3131308998/001/cp-test_ha-076753.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp ha-076753:/home/docker/cp-test.txt ha-076753-m02:/home/docker/cp-test_ha-076753_ha-076753-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m02 "sudo cat /home/docker/cp-test_ha-076753_ha-076753-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp ha-076753:/home/docker/cp-test.txt ha-076753-m03:/home/docker/cp-test_ha-076753_ha-076753-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m03 "sudo cat /home/docker/cp-test_ha-076753_ha-076753-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp ha-076753:/home/docker/cp-test.txt ha-076753-m04:/home/docker/cp-test_ha-076753_ha-076753-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m04 "sudo cat /home/docker/cp-test_ha-076753_ha-076753-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp testdata/cp-test.txt ha-076753-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp ha-076753-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3131308998/001/cp-test_ha-076753-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp ha-076753-m02:/home/docker/cp-test.txt ha-076753:/home/docker/cp-test_ha-076753-m02_ha-076753.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753 "sudo cat /home/docker/cp-test_ha-076753-m02_ha-076753.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp ha-076753-m02:/home/docker/cp-test.txt ha-076753-m03:/home/docker/cp-test_ha-076753-m02_ha-076753-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m03 "sudo cat /home/docker/cp-test_ha-076753-m02_ha-076753-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp ha-076753-m02:/home/docker/cp-test.txt ha-076753-m04:/home/docker/cp-test_ha-076753-m02_ha-076753-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m04 "sudo cat /home/docker/cp-test_ha-076753-m02_ha-076753-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp testdata/cp-test.txt ha-076753-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp ha-076753-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3131308998/001/cp-test_ha-076753-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp ha-076753-m03:/home/docker/cp-test.txt ha-076753:/home/docker/cp-test_ha-076753-m03_ha-076753.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753 "sudo cat /home/docker/cp-test_ha-076753-m03_ha-076753.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp ha-076753-m03:/home/docker/cp-test.txt ha-076753-m02:/home/docker/cp-test_ha-076753-m03_ha-076753-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m02 "sudo cat /home/docker/cp-test_ha-076753-m03_ha-076753-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp ha-076753-m03:/home/docker/cp-test.txt ha-076753-m04:/home/docker/cp-test_ha-076753-m03_ha-076753-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m04 "sudo cat /home/docker/cp-test_ha-076753-m03_ha-076753-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp testdata/cp-test.txt ha-076753-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp ha-076753-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3131308998/001/cp-test_ha-076753-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp ha-076753-m04:/home/docker/cp-test.txt ha-076753:/home/docker/cp-test_ha-076753-m04_ha-076753.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753 "sudo cat /home/docker/cp-test_ha-076753-m04_ha-076753.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp ha-076753-m04:/home/docker/cp-test.txt ha-076753-m02:/home/docker/cp-test_ha-076753-m04_ha-076753-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m02 "sudo cat /home/docker/cp-test_ha-076753-m04_ha-076753-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 cp ha-076753-m04:/home/docker/cp-test.txt ha-076753-m03:/home/docker/cp-test_ha-076753-m04_ha-076753-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753-m03 "sudo cat /home/docker/cp-test_ha-076753-m04_ha-076753-m03.txt"
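Note: every copy above is verified the same way: cp the file onto a node, then cat it back over ssh and compare. The pattern reduced to a single round trip, using commands taken verbatim from this run:

	out/minikube-linux-amd64 -p ha-076753 cp testdata/cp-test.txt ha-076753:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-076753 ssh -n ha-076753 "sudo cat /home/docker/cp-test.txt"   # should print the testdata contents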
--- PASS: TestMultiControlPlane/serial/CopyFile (15.99s)

TestMultiControlPlane/serial/StopSecondaryNode (12.5s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-076753 node stop m02 -v=7 --alsologtostderr: (11.841478345s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-076753 status -v=7 --alsologtostderr: exit status 7 (661.606952ms)

-- stdout --
	ha-076753
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076753-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-076753-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-076753-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1210 00:08:57.238281   83544 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:08:57.238428   83544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:08:57.238441   83544 out.go:358] Setting ErrFile to fd 2...
	I1210 00:08:57.238448   83544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:08:57.238640   83544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-8617/.minikube/bin
	I1210 00:08:57.238810   83544 out.go:352] Setting JSON to false
	I1210 00:08:57.238840   83544 mustload.go:65] Loading cluster: ha-076753
	I1210 00:08:57.238889   83544 notify.go:220] Checking for updates...
	I1210 00:08:57.239270   83544 config.go:182] Loaded profile config "ha-076753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:08:57.239289   83544 status.go:174] checking status of ha-076753 ...
	I1210 00:08:57.239724   83544 cli_runner.go:164] Run: docker container inspect ha-076753 --format={{.State.Status}}
	I1210 00:08:57.259124   83544 status.go:371] ha-076753 host status = "Running" (err=<nil>)
	I1210 00:08:57.259161   83544 host.go:66] Checking if "ha-076753" exists ...
	I1210 00:08:57.259477   83544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-076753
	I1210 00:08:57.277858   83544 host.go:66] Checking if "ha-076753" exists ...
	I1210 00:08:57.278194   83544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 00:08:57.278280   83544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-076753
	I1210 00:08:57.295943   83544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/ha-076753/id_rsa Username:docker}
	I1210 00:08:57.384681   83544 ssh_runner.go:195] Run: systemctl --version
	I1210 00:08:57.388796   83544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:08:57.399720   83544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 00:08:57.450427   83544 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-12-10 00:08:57.441380198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 00:08:57.450981   83544 kubeconfig.go:125] found "ha-076753" server: "https://192.168.49.254:8443"
	I1210 00:08:57.451009   83544 api_server.go:166] Checking apiserver status ...
	I1210 00:08:57.451046   83544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:08:57.461894   83544 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1476/cgroup
	I1210 00:08:57.470952   83544 api_server.go:182] apiserver freezer: "13:freezer:/docker/e0715ebd3e32d9a5d213f1c382b4a622e1665abb7b94025e0f12f60f9c2cc8c5/crio/crio-7ef60e7e2b7b05710f95f18adfcfe4489943258c57b9d9fc8d7c697042bb10b6"
	I1210 00:08:57.471066   83544 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e0715ebd3e32d9a5d213f1c382b4a622e1665abb7b94025e0f12f60f9c2cc8c5/crio/crio-7ef60e7e2b7b05710f95f18adfcfe4489943258c57b9d9fc8d7c697042bb10b6/freezer.state
	I1210 00:08:57.479170   83544 api_server.go:204] freezer state: "THAWED"
	I1210 00:08:57.479199   83544 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1210 00:08:57.482921   83544 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1210 00:08:57.482946   83544 status.go:463] ha-076753 apiserver status = Running (err=<nil>)
	I1210 00:08:57.482959   83544 status.go:176] ha-076753 status: &{Name:ha-076753 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:08:57.482977   83544 status.go:174] checking status of ha-076753-m02 ...
	I1210 00:08:57.483221   83544 cli_runner.go:164] Run: docker container inspect ha-076753-m02 --format={{.State.Status}}
	I1210 00:08:57.500982   83544 status.go:371] ha-076753-m02 host status = "Stopped" (err=<nil>)
	I1210 00:08:57.501012   83544 status.go:384] host is not running, skipping remaining checks
	I1210 00:08:57.501021   83544 status.go:176] ha-076753-m02 status: &{Name:ha-076753-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:08:57.501047   83544 status.go:174] checking status of ha-076753-m03 ...
	I1210 00:08:57.501345   83544 cli_runner.go:164] Run: docker container inspect ha-076753-m03 --format={{.State.Status}}
	I1210 00:08:57.520725   83544 status.go:371] ha-076753-m03 host status = "Running" (err=<nil>)
	I1210 00:08:57.520757   83544 host.go:66] Checking if "ha-076753-m03" exists ...
	I1210 00:08:57.521054   83544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-076753-m03
	I1210 00:08:57.539340   83544 host.go:66] Checking if "ha-076753-m03" exists ...
	I1210 00:08:57.539729   83544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 00:08:57.539783   83544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-076753-m03
	I1210 00:08:57.559300   83544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/ha-076753-m03/id_rsa Username:docker}
	I1210 00:08:57.648635   83544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:08:57.660031   83544 kubeconfig.go:125] found "ha-076753" server: "https://192.168.49.254:8443"
	I1210 00:08:57.660058   83544 api_server.go:166] Checking apiserver status ...
	I1210 00:08:57.660093   83544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:08:57.670516   83544 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1439/cgroup
	I1210 00:08:57.679714   83544 api_server.go:182] apiserver freezer: "13:freezer:/docker/5e8f89d1808a787e7ada74a3c0d0d024320260b8626d67e00a6158d13f28292c/crio/crio-4dd4b3ea714f9ced25f479bc3e3bbe14d01da051194d27a9eddcb8a0130171bc"
	I1210 00:08:57.679766   83544 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5e8f89d1808a787e7ada74a3c0d0d024320260b8626d67e00a6158d13f28292c/crio/crio-4dd4b3ea714f9ced25f479bc3e3bbe14d01da051194d27a9eddcb8a0130171bc/freezer.state
	I1210 00:08:57.687670   83544 api_server.go:204] freezer state: "THAWED"
	I1210 00:08:57.687701   83544 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1210 00:08:57.691275   83544 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1210 00:08:57.691303   83544 status.go:463] ha-076753-m03 apiserver status = Running (err=<nil>)
	I1210 00:08:57.691314   83544 status.go:176] ha-076753-m03 status: &{Name:ha-076753-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:08:57.691335   83544 status.go:174] checking status of ha-076753-m04 ...
	I1210 00:08:57.691649   83544 cli_runner.go:164] Run: docker container inspect ha-076753-m04 --format={{.State.Status}}
	I1210 00:08:57.709482   83544 status.go:371] ha-076753-m04 host status = "Running" (err=<nil>)
	I1210 00:08:57.709505   83544 host.go:66] Checking if "ha-076753-m04" exists ...
	I1210 00:08:57.709732   83544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-076753-m04
	I1210 00:08:57.728647   83544 host.go:66] Checking if "ha-076753-m04" exists ...
	I1210 00:08:57.728978   83544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 00:08:57.729021   83544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-076753-m04
	I1210 00:08:57.748479   83544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/ha-076753-m04/id_rsa Username:docker}
	I1210 00:08:57.840366   83544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:08:57.850988   83544 status.go:176] ha-076753-m04 status: &{Name:ha-076753-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
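Note: the stderr trace above shows how status decides an apiserver is live: find its PID with pgrep, resolve the freezer cgroup the container runtime placed it in, require the state to be THAWED, and only then probe /healthz. A hand-run sketch of the same sequence (run inside the node; <id> stands in for the container IDs logged above):

	sudo pgrep -xnf kube-apiserver.*minikube.*
	sudo cat /sys/fs/cgroup/freezer/docker/<id>/crio/crio-<id>/freezer.state   # expect THAWED
	curl -k https://192.168.49.254:8443/healthz                               # expect: ok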
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.50s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

TestMultiControlPlane/serial/RestartSecondaryNode (25.11s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-076753 node start m02 -v=7 --alsologtostderr: (23.898931788s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-076753 status -v=7 --alsologtostderr: (1.136109091s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (25.11s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.019889988s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (166.67s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-076753 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-076753 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-076753 -v=7 --alsologtostderr: (36.500517787s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-076753 --wait=true -v=7 --alsologtostderr
E1210 00:10:36.281695   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:36.288083   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:36.299485   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:36.320915   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:36.362374   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:36.443902   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:36.605437   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:36.926858   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:37.568572   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:38.850687   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:41.412748   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:46.534198   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:56.776065   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:11:15.552101   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:11:17.257968   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:11:58.220197   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-076753 --wait=true -v=7 --alsologtostderr: (2m10.064024438s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-076753
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (166.67s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.34s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-076753 node delete m03 -v=7 --alsologtostderr: (10.581083299s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.34s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

TestMultiControlPlane/serial/StopCluster (35.6s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 stop -v=7 --alsologtostderr
E1210 00:12:38.617171   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-076753 stop -v=7 --alsologtostderr: (35.500904072s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-076753 status -v=7 --alsologtostderr: exit status 7 (102.745844ms)

-- stdout --
	ha-076753
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-076753-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-076753-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1210 00:12:58.865676  100709 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:12:58.866188  100709 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:12:58.866199  100709 out.go:358] Setting ErrFile to fd 2...
	I1210 00:12:58.866203  100709 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:12:58.866415  100709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-8617/.minikube/bin
	I1210 00:12:58.866664  100709 out.go:352] Setting JSON to false
	I1210 00:12:58.866692  100709 mustload.go:65] Loading cluster: ha-076753
	I1210 00:12:58.866726  100709 notify.go:220] Checking for updates...
	I1210 00:12:58.867108  100709 config.go:182] Loaded profile config "ha-076753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:12:58.867127  100709 status.go:174] checking status of ha-076753 ...
	I1210 00:12:58.867635  100709 cli_runner.go:164] Run: docker container inspect ha-076753 --format={{.State.Status}}
	I1210 00:12:58.885719  100709 status.go:371] ha-076753 host status = "Stopped" (err=<nil>)
	I1210 00:12:58.885767  100709 status.go:384] host is not running, skipping remaining checks
	I1210 00:12:58.885776  100709 status.go:176] ha-076753 status: &{Name:ha-076753 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:12:58.885810  100709 status.go:174] checking status of ha-076753-m02 ...
	I1210 00:12:58.886169  100709 cli_runner.go:164] Run: docker container inspect ha-076753-m02 --format={{.State.Status}}
	I1210 00:12:58.903974  100709 status.go:371] ha-076753-m02 host status = "Stopped" (err=<nil>)
	I1210 00:12:58.904014  100709 status.go:384] host is not running, skipping remaining checks
	I1210 00:12:58.904023  100709 status.go:176] ha-076753-m02 status: &{Name:ha-076753-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:12:58.904055  100709 status.go:174] checking status of ha-076753-m04 ...
	I1210 00:12:58.904338  100709 cli_runner.go:164] Run: docker container inspect ha-076753-m04 --format={{.State.Status}}
	I1210 00:12:58.922031  100709 status.go:371] ha-076753-m04 host status = "Stopped" (err=<nil>)
	I1210 00:12:58.922057  100709 status.go:384] host is not running, skipping remaining checks
	I1210 00:12:58.922064  100709 status.go:176] ha-076753-m04 status: &{Name:ha-076753-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.60s)

TestMultiControlPlane/serial/RestartCluster (119.02s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-076753 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1210 00:13:20.143660   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-076753 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m58.23989515s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (119.02s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

TestMultiControlPlane/serial/AddSecondaryNode (41.55s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-076753 --control-plane -v=7 --alsologtostderr
E1210 00:15:36.281960   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-076753 --control-plane -v=7 --alsologtostderr: (40.712423358s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-076753 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.55s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestJSONOutput/start/Command (39.91s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-264239 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1210 00:16:03.985616   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:16:15.551722   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-264239 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (39.91019884s)
--- PASS: TestJSONOutput/start/Command (39.91s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-264239 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-264239 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.77s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-264239 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-264239 --output=json --user=testUser: (5.767441106s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-150343 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-150343 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (70.254905ms)

-- stdout --
	{"specversion":"1.0","id":"178ba957-363c-434a-9aeb-581412c46297","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-150343] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f5854aac-588b-4a85-8bcc-4f7ca749288c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20062"}}
	{"specversion":"1.0","id":"a1907232-53f4-451f-9cff-98cffee3523c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c0a34db0-75ea-4581-8c49-5c81896c952a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20062-8617/kubeconfig"}}
	{"specversion":"1.0","id":"35827133-3de5-4d70-85a1-2d4bbe0c3930","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-8617/.minikube"}}
	{"specversion":"1.0","id":"3e39d6e2-bdc2-4856-b26b-bb61dc45c653","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a7fca8dc-685d-4250-ae9c-0a55d510bdf7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"897ce09a-77b8-4c4a-80c2-5c5da53d4f03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-150343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-150343
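Note: with --output=json every line on stdout is a CloudEvents-style JSON object, and the expected failure surfaces as a single io.k8s.sigs.minikube.error event carrying exitcode 56 (DRV_UNSUPPORTED_OS). A sketch for isolating that event from such a stream, assuming jq is available (not part of this run):

	out/minikube-linux-amd64 start -p json-output-error-150343 --output=json --driver=fail 2>/dev/null \
	  | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'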
--- PASS: TestErrorJSONOutput (0.21s)

TestKicCustomNetwork/create_custom_network (29.59s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-070447 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-070447 --network=: (27.569225127s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-070447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-070447
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-070447: (1.998774108s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.59s)

TestKicCustomNetwork/use_default_bridge_network (23.2s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-357200 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-357200 --network=bridge: (21.314173624s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-357200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-357200
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-357200: (1.865184768s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.20s)

TestKicExistingNetwork (25.69s)

=== RUN   TestKicExistingNetwork
I1210 00:17:32.643774   15396 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1210 00:17:32.661254   15396 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1210 00:17:32.661323   15396 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1210 00:17:32.661342   15396 cli_runner.go:164] Run: docker network inspect existing-network
W1210 00:17:32.678826   15396 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1210 00:17:32.678857   15396 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1210 00:17:32.678869   15396 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1210 00:17:32.679118   15396 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 00:17:32.697254   15396 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-18963c7e2bd7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:62:de:5c:69} reservation:<nil>}
I1210 00:17:32.697786   15396 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000714240}
I1210 00:17:32.697815   15396 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1210 00:17:32.697870   15396 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1210 00:17:32.762089   15396 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-112654 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-112654 --network=existing-network: (23.602707758s)
helpers_test.go:175: Cleaning up "existing-network-112654" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-112654
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-112654: (1.938085829s)
I1210 00:17:58.320907   15396 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.69s)
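Editor's note: the flow above can be reproduced outside the suite by pre-creating the bridge network and pointing minikube at it. A minimal Go sketch follows; the network name, subnet, and gateway are copied from the log, while the bare "minikube" binary on PATH and the "existing-network-demo" profile name are assumptions (the suite uses out/minikube-linux-amd64 and generated profile names).

    // Sketch: pre-create a docker bridge network, then start minikube on it,
    // mirroring TestKicExistingNetwork above. Extra -o options from the log
    // (--ip-masq, --icc, minikube labels) are omitted for brevity.
    package main

    import (
        "log"
        "os/exec"
    )

    func run(name string, args ...string) {
        if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
            log.Fatalf("%s %v: %v\n%s", name, args, err, out)
        }
    }

    func main() {
        run("docker", "network", "create", "--driver=bridge",
            "--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
            "-o", "com.docker.network.driver.mtu=1500", "existing-network")
        run("minikube", "start", "-p", "existing-network-demo",
            "--network=existing-network")
    }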

                                                
                                    
x
+
TestKicCustomSubnet (26.94s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-109140 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-109140 --subnet=192.168.60.0/24: (24.847374563s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-109140 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-109140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-109140
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-109140: (2.071245068s)
--- PASS: TestKicCustomSubnet (26.94s)
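Editor's note: the subnet assertion at kic_custom_network_test.go:161 reduces to a single docker inspect call. A sketch, assuming a placeholder network name; the Go template is copied verbatim from the log above.

    // Sketch: read back the subnet of a minikube-created network.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("docker", "network", "inspect", "custom-subnet-demo",
            "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
        if err != nil {
            panic(err)
        }
        fmt.Printf("subnet: %s", out) // expect e.g. 192.168.60.0/24
    }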

                                                
                                    
x
+
TestKicStaticIP (26.69s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-641533 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-641533 --static-ip=192.168.200.200: (24.575128919s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-641533 ip
helpers_test.go:175: Cleaning up "static-ip-641533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-641533
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-641533: (1.991610703s)
--- PASS: TestKicStaticIP (26.69s)
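Editor's note: a sketch of the static-IP round-trip above — start with a pinned address, then read it back with `minikube ip`. The address 192.168.200.200 comes from the log; the profile name is a placeholder.

    // Sketch: pin the node IP and verify it.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        if out, err := exec.Command("minikube", "start", "-p", "static-ip-demo",
            "--static-ip=192.168.200.200").CombinedOutput(); err != nil {
            log.Fatalf("start: %v\n%s", err, out)
        }
        ip, err := exec.Command("minikube", "-p", "static-ip-demo", "ip").Output()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("assigned IP: %s", ip) // expect 192.168.200.200
    }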

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (49.36s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-215774 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-215774 --driver=docker  --container-runtime=crio: (23.334466972s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-230316 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-230316 --driver=docker  --container-runtime=crio: (20.813731528s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-215774
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-230316
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-230316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-230316
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-230316: (1.838553192s)
helpers_test.go:175: Cleaning up "first-215774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-215774
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-215774: (2.23125348s)
--- PASS: TestMinikubeProfile (49.36s)
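Editor's note: the test switches the active profile and then parses `profile list -ojson`. A sketch that decodes into a loose map so it does not depend on the exact schema; the top-level "valid" key is an assumption based on current minikube output.

    // Sketch: list profiles as JSON and pull out the "valid" entries.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
        if err != nil {
            panic(err)
        }
        var profiles map[string]json.RawMessage
        if err := json.Unmarshal(out, &profiles); err != nil {
            panic(err)
        }
        fmt.Printf("valid profiles: %s\n", profiles["valid"])
    }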

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (5.39s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-002505 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-002505 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.386353466s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.39s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-002505 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (5.39s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-019469 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-019469 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.392739262s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.39s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-019469 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-002505 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-002505 --alsologtostderr -v=5: (1.594104617s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-019469 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.18s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-019469
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-019469: (1.176446693s)
--- PASS: TestMountStart/serial/Stop (1.18s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.22s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-019469
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-019469: (6.22078512s)
--- PASS: TestMountStart/serial/RestartStopped (7.22s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-019469 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)
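Editor's note: every VerifyMount* step in the series above reduces to listing the 9p mount point over SSH and treating a non-zero exit as "mount missing". A sketch with a placeholder profile name:

    // Sketch: check that the host mount is visible inside the node.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("minikube", "-p", "mount-start-2-demo",
            "ssh", "--", "ls", "/minikube-host")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("mount not visible: %v\n%s", err, out)
        } else {
            fmt.Printf("host mount contents:\n%s", out)
        }
    }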

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (70.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-331733 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1210 00:20:36.281452   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-331733 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m10.190618978s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (70.63s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331733 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
E1210 00:21:15.551772   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331733 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-331733 -- rollout status deployment/busybox: (3.562675595s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331733 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331733 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331733 -- exec busybox-7dff88458-5tt7k -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331733 -- exec busybox-7dff88458-gg29t -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331733 -- exec busybox-7dff88458-5tt7k -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331733 -- exec busybox-7dff88458-gg29t -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331733 -- exec busybox-7dff88458-5tt7k -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331733 -- exec busybox-7dff88458-gg29t -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.93s)
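Editor's note: the DNS half of DeployApp2Nodes boils down to waiting for the busybox rollout and resolving cluster names from a pod. A sketch assuming kubectl already points at the cluster; the pod name is a placeholder (real runs list pods first, as the test does at multinode_test.go:528).

    // Sketch: wait for the rollout, then resolve kubernetes.default from a pod.
    package main

    import (
        "log"
        "os/exec"
    )

    func kubectl(args ...string) []byte {
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        if err != nil {
            log.Fatalf("kubectl %v: %v\n%s", args, err, out)
        }
        return out
    }

    func main() {
        kubectl("rollout", "status", "deployment/busybox")
        out := kubectl("exec", "busybox-7dff88458-5tt7k", "--",
            "nslookup", "kubernetes.default.svc.cluster.local")
        log.Printf("%s", out)
    }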

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331733 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331733 -- exec busybox-7dff88458-5tt7k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331733 -- exec busybox-7dff88458-5tt7k -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331733 -- exec busybox-7dff88458-gg29t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331733 -- exec busybox-7dff88458-gg29t -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)
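Editor's note: the `awk 'NR==5' | cut -d' ' -f3` pipeline above grabs the third field of nslookup's fifth output line, i.e. the address host.minikube.internal resolves to (192.168.67.1 here), which the test then pings. A sketch reusing the same shell pipeline, pod name again a placeholder:

    // Sketch: resolve the host gateway address from inside a pod.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
        ip, err := exec.Command("kubectl", "exec", "busybox-7dff88458-5tt7k",
            "--", "sh", "-c", script).Output()
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("host IP inside pod: %s", ip)
    }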

                                                
                                    
x
+
TestMultiNode/serial/AddNode (27.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-331733 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-331733 -v 3 --alsologtostderr: (26.942943588s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.54s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-331733 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (8.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 cp testdata/cp-test.txt multinode-331733:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 cp multinode-331733:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile991999657/001/cp-test_multinode-331733.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 cp multinode-331733:/home/docker/cp-test.txt multinode-331733-m02:/home/docker/cp-test_multinode-331733_multinode-331733-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733-m02 "sudo cat /home/docker/cp-test_multinode-331733_multinode-331733-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 cp multinode-331733:/home/docker/cp-test.txt multinode-331733-m03:/home/docker/cp-test_multinode-331733_multinode-331733-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733-m03 "sudo cat /home/docker/cp-test_multinode-331733_multinode-331733-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 cp testdata/cp-test.txt multinode-331733-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 cp multinode-331733-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile991999657/001/cp-test_multinode-331733-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 cp multinode-331733-m02:/home/docker/cp-test.txt multinode-331733:/home/docker/cp-test_multinode-331733-m02_multinode-331733.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733 "sudo cat /home/docker/cp-test_multinode-331733-m02_multinode-331733.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 cp multinode-331733-m02:/home/docker/cp-test.txt multinode-331733-m03:/home/docker/cp-test_multinode-331733-m02_multinode-331733-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733-m03 "sudo cat /home/docker/cp-test_multinode-331733-m02_multinode-331733-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 cp testdata/cp-test.txt multinode-331733-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 cp multinode-331733-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile991999657/001/cp-test_multinode-331733-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 cp multinode-331733-m03:/home/docker/cp-test.txt multinode-331733:/home/docker/cp-test_multinode-331733-m03_multinode-331733.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733 "sudo cat /home/docker/cp-test_multinode-331733-m03_multinode-331733.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 cp multinode-331733-m03:/home/docker/cp-test.txt multinode-331733-m02:/home/docker/cp-test_multinode-331733-m03_multinode-331733-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 ssh -n multinode-331733-m02 "sudo cat /home/docker/cp-test_multinode-331733-m03_multinode-331733-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.94s)
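Editor's note: each cell of the copy matrix above is `minikube cp` followed by an SSH `sudo cat` to verify the round-trip. A sketch of one cell; the same-node case is shown, and the profile name comes from the log.

    // Sketch: push a file to a node and read it back over SSH.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        p := "multinode-331733"
        if out, err := exec.Command("minikube", "-p", p, "cp",
            "testdata/cp-test.txt", p+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
            log.Fatalf("cp: %v\n%s", err, out)
        }
        out, err := exec.Command("minikube", "-p", p, "ssh", "-n", p,
            "sudo cat /home/docker/cp-test.txt").Output()
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("round-tripped: %s", out)
    }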

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-331733 node stop m03: (1.175191707s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-331733 status: exit status 7 (455.375991ms)

                                                
                                                
-- stdout --
	multinode-331733
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-331733-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-331733-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-331733 status --alsologtostderr: exit status 7 (468.949706ms)

                                                
                                                
-- stdout --
	multinode-331733
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-331733-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-331733-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 00:22:00.000099  167073 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:22:00.000320  167073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:22:00.000336  167073 out.go:358] Setting ErrFile to fd 2...
	I1210 00:22:00.000343  167073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:22:00.000544  167073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-8617/.minikube/bin
	I1210 00:22:00.000723  167073 out.go:352] Setting JSON to false
	I1210 00:22:00.000755  167073 mustload.go:65] Loading cluster: multinode-331733
	I1210 00:22:00.000883  167073 notify.go:220] Checking for updates...
	I1210 00:22:00.001318  167073 config.go:182] Loaded profile config "multinode-331733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:22:00.001349  167073 status.go:174] checking status of multinode-331733 ...
	I1210 00:22:00.002058  167073 cli_runner.go:164] Run: docker container inspect multinode-331733 --format={{.State.Status}}
	I1210 00:22:00.021176  167073 status.go:371] multinode-331733 host status = "Running" (err=<nil>)
	I1210 00:22:00.021215  167073 host.go:66] Checking if "multinode-331733" exists ...
	I1210 00:22:00.021492  167073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-331733
	I1210 00:22:00.039861  167073 host.go:66] Checking if "multinode-331733" exists ...
	I1210 00:22:00.040171  167073 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 00:22:00.040214  167073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-331733
	I1210 00:22:00.058212  167073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/multinode-331733/id_rsa Username:docker}
	I1210 00:22:00.152922  167073 ssh_runner.go:195] Run: systemctl --version
	I1210 00:22:00.157320  167073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:22:00.168190  167073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 00:22:00.215560  167073 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-12-10 00:22:00.20610292 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 00:22:00.216377  167073 kubeconfig.go:125] found "multinode-331733" server: "https://192.168.67.2:8443"
	I1210 00:22:00.216411  167073 api_server.go:166] Checking apiserver status ...
	I1210 00:22:00.216455  167073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:22:00.226927  167073 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1504/cgroup
	I1210 00:22:00.236024  167073 api_server.go:182] apiserver freezer: "13:freezer:/docker/e2530fa2e03d87b5ba79ca633a8595a77500370c518a8cc2bd66f9a4bc3c316a/crio/crio-a3bf22459486213c44108ba06b44cbc940437d6381d2248f5199cc5e1f889e39"
	I1210 00:22:00.236119  167073 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e2530fa2e03d87b5ba79ca633a8595a77500370c518a8cc2bd66f9a4bc3c316a/crio/crio-a3bf22459486213c44108ba06b44cbc940437d6381d2248f5199cc5e1f889e39/freezer.state
	I1210 00:22:00.244136  167073 api_server.go:204] freezer state: "THAWED"
	I1210 00:22:00.244162  167073 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1210 00:22:00.247920  167073 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1210 00:22:00.247943  167073 status.go:463] multinode-331733 apiserver status = Running (err=<nil>)
	I1210 00:22:00.247952  167073 status.go:176] multinode-331733 status: &{Name:multinode-331733 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:22:00.247966  167073 status.go:174] checking status of multinode-331733-m02 ...
	I1210 00:22:00.248188  167073 cli_runner.go:164] Run: docker container inspect multinode-331733-m02 --format={{.State.Status}}
	I1210 00:22:00.265616  167073 status.go:371] multinode-331733-m02 host status = "Running" (err=<nil>)
	I1210 00:22:00.265640  167073 host.go:66] Checking if "multinode-331733-m02" exists ...
	I1210 00:22:00.265874  167073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-331733-m02
	I1210 00:22:00.282756  167073 host.go:66] Checking if "multinode-331733-m02" exists ...
	I1210 00:22:00.282990  167073 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 00:22:00.283021  167073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-331733-m02
	I1210 00:22:00.301036  167073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/20062-8617/.minikube/machines/multinode-331733-m02/id_rsa Username:docker}
	I1210 00:22:00.388484  167073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:22:00.399125  167073 status.go:176] multinode-331733-m02 status: &{Name:multinode-331733-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:22:00.399166  167073 status.go:174] checking status of multinode-331733-m03 ...
	I1210 00:22:00.399416  167073 cli_runner.go:164] Run: docker container inspect multinode-331733-m03 --format={{.State.Status}}
	I1210 00:22:00.417104  167073 status.go:371] multinode-331733-m03 host status = "Stopped" (err=<nil>)
	I1210 00:22:00.417129  167073 status.go:384] host is not running, skipping remaining checks
	I1210 00:22:00.417144  167073 status.go:176] multinode-331733-m03 status: &{Name:multinode-331733-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.10s)
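Editor's note: `minikube status` deliberately exits 7 when any node is stopped, so the test treats a non-zero exit as data rather than failure. In Go that means inspecting *exec.ExitError instead of bailing out; a sketch:

    // Sketch: read status and tolerate the "stopped node" exit code.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("minikube", "-p", "multinode-331733", "status").Output()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Printf("status exit code %d (expected 7 with a stopped node)\n", ee.ExitCode())
        } else if err != nil {
            panic(err)
        }
        fmt.Printf("%s", out)
    }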

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-331733 node start m03 -v=7 --alsologtostderr: (8.286710075s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.94s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (100.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-331733
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-331733
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-331733: (24.675382781s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-331733 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-331733 --wait=true -v=8 --alsologtostderr: (1m15.754846093s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-331733
--- PASS: TestMultiNode/serial/RestartKeepsNodes (100.53s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-331733 node delete m03: (4.660496626s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.21s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-331733 stop: (23.520751825s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-331733 status: exit status 7 (91.611348ms)

                                                
                                                
-- stdout --
	multinode-331733
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-331733-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-331733 status --alsologtostderr: exit status 7 (88.387732ms)

                                                
                                                
-- stdout --
	multinode-331733
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-331733-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 00:24:18.756788  176751 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:24:18.757047  176751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:24:18.757057  176751 out.go:358] Setting ErrFile to fd 2...
	I1210 00:24:18.757061  176751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:24:18.757236  176751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-8617/.minikube/bin
	I1210 00:24:18.757410  176751 out.go:352] Setting JSON to false
	I1210 00:24:18.757438  176751 mustload.go:65] Loading cluster: multinode-331733
	I1210 00:24:18.757473  176751 notify.go:220] Checking for updates...
	I1210 00:24:18.757779  176751 config.go:182] Loaded profile config "multinode-331733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:24:18.757798  176751 status.go:174] checking status of multinode-331733 ...
	I1210 00:24:18.758199  176751 cli_runner.go:164] Run: docker container inspect multinode-331733 --format={{.State.Status}}
	I1210 00:24:18.777941  176751 status.go:371] multinode-331733 host status = "Stopped" (err=<nil>)
	I1210 00:24:18.777964  176751 status.go:384] host is not running, skipping remaining checks
	I1210 00:24:18.777971  176751 status.go:176] multinode-331733 status: &{Name:multinode-331733 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:24:18.777998  176751 status.go:174] checking status of multinode-331733-m02 ...
	I1210 00:24:18.778269  176751 cli_runner.go:164] Run: docker container inspect multinode-331733-m02 --format={{.State.Status}}
	I1210 00:24:18.796126  176751 status.go:371] multinode-331733-m02 host status = "Stopped" (err=<nil>)
	I1210 00:24:18.796157  176751 status.go:384] host is not running, skipping remaining checks
	I1210 00:24:18.796170  176751 status.go:176] multinode-331733-m02 status: &{Name:multinode-331733-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.70s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (47.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-331733 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-331733 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (46.655946084s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331733 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.23s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (25.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-331733
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-331733-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-331733-m02 --driver=docker  --container-runtime=crio: exit status 14 (71.880623ms)

                                                
                                                
-- stdout --
	* [multinode-331733-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-8617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-8617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-331733-m02' is duplicated with machine name 'multinode-331733-m02' in profile 'multinode-331733'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-331733-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-331733-m03 --driver=docker  --container-runtime=crio: (23.717950759s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-331733
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-331733: exit status 80 (273.707062ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-331733 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-331733-m03 already exists in multinode-331733-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-331733-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-331733-m03: (1.864750149s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.98s)
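Editor's note: the two negative cases above assert specific exit codes — 14 (MK_USAGE) for a duplicated profile name and 80 (GUEST_NODE_ADD) for the node-add conflict. A sketch of the first check, mirroring multinode_test.go:464:

    // Sketch: confirm a colliding profile name is rejected with exit code 14.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("minikube", "start", "-p", "multinode-331733-m02").Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 14 {
            fmt.Println("duplicate profile name rejected as expected")
        } else {
            fmt.Println("unexpected result:", err)
        }
    }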

                                                
                                    
x
+
TestPreload (105.99s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-296851 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1210 00:25:36.281476   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:15.551304   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-296851 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m16.965480667s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-296851 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-296851 image pull gcr.io/k8s-minikube/busybox: (3.231340722s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-296851
E1210 00:26:59.354092   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-296851: (5.680784843s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-296851 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-296851 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (17.631102099s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-296851 image list
helpers_test.go:175: Cleaning up "test-preload-296851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-296851
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-296851: (2.257129092s)
--- PASS: TestPreload (105.99s)
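Editor's note: the preload round-trip above pulls an extra image into a cluster started with --preload=false, restarts, and confirms the image survives. A sketch under those assumptions; the substring match on `image list` output and the demo profile name are placeholders.

    // Sketch: pull, stop, restart, and verify the image is still present.
    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func mk(args ...string) []byte {
        out, err := exec.Command("minikube", args...).CombinedOutput()
        if err != nil {
            log.Fatalf("minikube %v: %v\n%s", args, err, out)
        }
        return out
    }

    func main() {
        p := "test-preload-demo"
        mk("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
        mk("stop", "-p", p)
        mk("start", "-p", p, "--wait=true")
        if !strings.Contains(string(mk("-p", p, "image", "list")), "busybox") {
            log.Fatal("image lost across restart")
        }
    }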

                                                
                                    
x
+
TestScheduledStopUnix (98.8s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-195345 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-195345 --memory=2048 --driver=docker  --container-runtime=crio: (23.469008068s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-195345 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-195345 -n scheduled-stop-195345
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-195345 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1210 00:27:45.773216   15396 retry.go:31] will retry after 135.057µs: open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/scheduled-stop-195345/pid: no such file or directory
I1210 00:27:45.774382   15396 retry.go:31] will retry after 112.141µs: open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/scheduled-stop-195345/pid: no such file or directory
I1210 00:27:45.775569   15396 retry.go:31] will retry after 298.8µs: open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/scheduled-stop-195345/pid: no such file or directory
I1210 00:27:45.776725   15396 retry.go:31] will retry after 435.51µs: open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/scheduled-stop-195345/pid: no such file or directory
I1210 00:27:45.777884   15396 retry.go:31] will retry after 384.72µs: open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/scheduled-stop-195345/pid: no such file or directory
I1210 00:27:45.779023   15396 retry.go:31] will retry after 537.911µs: open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/scheduled-stop-195345/pid: no such file or directory
I1210 00:27:45.780153   15396 retry.go:31] will retry after 964.861µs: open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/scheduled-stop-195345/pid: no such file or directory
I1210 00:27:45.781330   15396 retry.go:31] will retry after 2.127821ms: open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/scheduled-stop-195345/pid: no such file or directory
I1210 00:27:45.784525   15396 retry.go:31] will retry after 2.46971ms: open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/scheduled-stop-195345/pid: no such file or directory
I1210 00:27:45.787725   15396 retry.go:31] will retry after 3.92091ms: open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/scheduled-stop-195345/pid: no such file or directory
I1210 00:27:45.791921   15396 retry.go:31] will retry after 7.857746ms: open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/scheduled-stop-195345/pid: no such file or directory
I1210 00:27:45.800176   15396 retry.go:31] will retry after 6.12983ms: open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/scheduled-stop-195345/pid: no such file or directory
I1210 00:27:45.807433   15396 retry.go:31] will retry after 13.281182ms: open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/scheduled-stop-195345/pid: no such file or directory
I1210 00:27:45.821669   15396 retry.go:31] will retry after 14.032646ms: open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/scheduled-stop-195345/pid: no such file or directory
I1210 00:27:45.835850   15396 retry.go:31] will retry after 40.882714ms: open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/scheduled-stop-195345/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-195345 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-195345 -n scheduled-stop-195345
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-195345
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-195345 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-195345
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-195345: exit status 7 (65.471786ms)

                                                
                                                
-- stdout --
	scheduled-stop-195345
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-195345 -n scheduled-stop-195345
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-195345 -n scheduled-stop-195345: exit status 7 (66.855563ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-195345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-195345
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-195345: (4.006614337s)
--- PASS: TestScheduledStopUnix (98.80s)
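Editor's note: the core of the scheduled-stop dance above is arming a delayed stop and then cancelling it before it fires. Both flags appear verbatim in the log; the profile name below is a placeholder.

    // Sketch: schedule a stop in 5 minutes, then cancel it.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        p := "scheduled-stop-demo"
        for _, args := range [][]string{
            {"stop", "-p", p, "--schedule", "5m"},
            {"stop", "-p", p, "--cancel-scheduled"},
        } {
            if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
                log.Fatalf("minikube %v: %v\n%s", args, err, out)
            }
        }
    }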

                                                
                                    
x
+
TestInsufficientStorage (9.87s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-197913 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-197913 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.500783348s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3a7e03af-d06c-4acf-9d1a-880452e8c367","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-197913] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b6a646ec-e635-4417-ace8-788b7258d988","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20062"}}
	{"specversion":"1.0","id":"c7901469-603c-48e2-a5f0-909a5e1c9472","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9b0fa021-66fb-41ad-a8c6-54343755f098","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20062-8617/kubeconfig"}}
	{"specversion":"1.0","id":"4fbd47cf-a772-4a2a-9eff-947ff95d73b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-8617/.minikube"}}
	{"specversion":"1.0","id":"c02d06e9-cf57-455c-82ee-036d5f3e70b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b1275b21-47e6-49f0-8d49-c13029edff7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"27306f53-ce0b-463a-816d-b905ae275f13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a4be2381-406f-4bc3-a254-a5bc3e6173fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b586e0b5-e0a3-4e12-b721-1d65eaa85123","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d2c28ee-bd6d-41f3-afc4-b05aa28c7d26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0225c402-581f-495b-b5eb-47ecf3ecdfef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-197913\" primary control-plane node in \"insufficient-storage-197913\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bb445fb8-58c9-42e2-842b-183bd1c0ce95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1730888964-19917 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a1c8f52c-a496-4302-ba21-492faa6aa0fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0ae57c5a-b784-4792-af24-4a1a0412dd93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
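With --output=json, minikube start emits one CloudEvents-style JSON object per line, as shown above, and the failure arrives as a type "io.k8s.sigs.minikube.error" event whose data carries the name (RSRC_DOCKER_STORAGE), message, and exitcode ("26", matching the process exit status). A minimal sketch of scanning that stream, assuming only the fields visible in this log:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event models just the fields used in the log above; all data values
	// in these events are strings (e.g. "exitcode":"26").
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // pipe `minikube start --output=json` here
		sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit code %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}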
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-197913 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-197913 --output=json --layout=cluster: exit status 7 (264.707254ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-197913","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-197913","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 00:29:08.457129  199247 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-197913" does not appear in /home/jenkins/minikube-integration/20062-8617/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-197913 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-197913 --output=json --layout=cluster: exit status 7 (258.609725ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-197913","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-197913","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 00:29:08.716529  199363 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-197913" does not appear in /home/jenkins/minikube-integration/20062-8617/kubeconfig
	E1210 00:29:08.726411  199363 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/insufficient-storage-197913/events.json: no such file or directory

                                                
                                                
** /stderr **
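The --layout=cluster payload above nests per-component states under the cluster and each node, using HTTP-like codes (507 InsufficientStorage, 405 Stopped, 500 Error). A minimal Go sketch of decoding it, using only the field names visible in this run's output:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type component struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type cluster struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
		Nodes      []struct {
			Name       string               `json:"Name"`
			StatusCode int                  `json:"StatusCode"`
			Components map[string]component `json:"Components"`
		} `json:"Nodes"`
	}

	func main() {
		var c cluster // pipe `minikube status --output=json --layout=cluster` here
		if err := json.NewDecoder(os.Stdin).Decode(&c); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if c.StatusCode == 507 { // InsufficientStorage, as in the run above
			fmt.Printf("%s: %s\n", c.Name, c.StatusName)
		}
		for _, n := range c.Nodes {
			for name, comp := range n.Components {
				fmt.Printf("  %s/%s: %s (%d)\n", n.Name, name, comp.StatusName, comp.StatusCode)
			}
		}
	}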
helpers_test.go:175: Cleaning up "insufficient-storage-197913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-197913
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-197913: (1.842096813s)
--- PASS: TestInsufficientStorage (9.87s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (59.84s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.379816860 start -p running-upgrade-549393 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.379816860 start -p running-upgrade-549393 --memory=2200 --vm-driver=docker  --container-runtime=crio: (33.347800074s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-549393 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-549393 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.455357783s)
helpers_test.go:175: Cleaning up "running-upgrade-549393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-549393
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-549393: (2.453927031s)
--- PASS: TestRunningBinaryUpgrade (59.84s)

                                                
                                    
x
+
TestKubernetesUpgrade (354.12s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-077211 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-077211 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.608018719s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-077211
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-077211: (1.235907618s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-077211 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-077211 status --format={{.Host}}: exit status 7 (96.518906ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-077211 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-077211 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.200788981s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-077211 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-077211 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-077211 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (73.68536ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-077211] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-8617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-8617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-077211
	    minikube start -p kubernetes-upgrade-077211 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0772112 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-077211 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-077211 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-077211 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.45524752s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-077211" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-077211
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-077211: (2.392856946s)
--- PASS: TestKubernetesUpgrade (354.12s)

                                                
                                    
x
+
TestMissingContainerUpgrade (132.12s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1721828655 start -p missing-upgrade-567956 --memory=2200 --driver=docker  --container-runtime=crio
E1210 00:29:18.618762   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1721828655 start -p missing-upgrade-567956 --memory=2200 --driver=docker  --container-runtime=crio: (1m2.422631372s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-567956
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-567956: (11.58373339s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-567956
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-567956 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1210 00:30:36.282196   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-567956 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (55.557401004s)
helpers_test.go:175: Cleaning up "missing-upgrade-567956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-567956
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-567956: (1.948330134s)
--- PASS: TestMissingContainerUpgrade (132.12s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-536620 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-536620 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (75.600136ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-536620] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-8617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-8617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (32.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-536620 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-536620 --driver=docker  --container-runtime=crio: (32.335148609s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-536620 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (32.64s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (95.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4160060585 start -p stopped-upgrade-563148 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4160060585 start -p stopped-upgrade-563148 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m2.254832568s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4160060585 -p stopped-upgrade-563148 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4160060585 -p stopped-upgrade-563148 stop: (4.856499223s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-563148 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-563148 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.376770251s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (95.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (14.00s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-536620 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-536620 --no-kubernetes --driver=docker  --container-runtime=crio: (11.623329248s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-536620 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-536620 status -o json: exit status 2 (342.810001ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-536620","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-536620
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-536620: (2.03715133s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (14.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (5.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-536620 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-536620 --no-kubernetes --driver=docker  --container-runtime=crio: (5.010564493s)
--- PASS: TestNoKubernetes/serial/Start (5.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-536620 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-536620 "sudo systemctl is-active --quiet service kubelet": exit status 1 (269.602488ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
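The probe above relies on systemctl semantics: `systemctl is-active` exits 0 only when the unit is active, and the exit status 3 seen in stderr is consistent with systemd's convention for an inactive unit. A minimal Go sketch of the same check over minikube ssh, with the command string copied verbatim from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Non-zero exit (status 3 for an inactive unit) means kubelet is not running.
		cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-536620",
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not running, as expected with --no-kubernetes")
			return
		}
		fmt.Println("unexpected: kubelet is active")
	}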

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (7.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.028193749s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (6.189494735s)
--- PASS: TestNoKubernetes/serial/ProfileList (7.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-536620
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-536620: (1.25561946s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-536620 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-536620 --driver=docker  --container-runtime=crio: (7.293627669s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-536620 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-536620 "sudo systemctl is-active --quiet service kubelet": exit status 1 (332.942615ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

                                                
                                    
x
+
TestPause/serial/Start (48.20s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-005494 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-005494 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (48.203922727s)
--- PASS: TestPause/serial/Start (48.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-563148
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-563148: (1.014251962s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (40.23s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-005494 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-005494 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.208970495s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-128397 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-128397 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (161.516685ms)

                                                
                                                
-- stdout --
	* [false-128397] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-8617/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-8617/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 00:31:25.719764  237558 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:31:25.719880  237558 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:31:25.719889  237558 out.go:358] Setting ErrFile to fd 2...
	I1210 00:31:25.719893  237558 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:31:25.720129  237558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-8617/.minikube/bin
	I1210 00:31:25.720715  237558 out.go:352] Setting JSON to false
	I1210 00:31:25.721772  237558 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4433,"bootTime":1733786253,"procs":273,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:31:25.721876  237558 start.go:139] virtualization: kvm guest
	I1210 00:31:25.724447  237558 out.go:177] * [false-128397] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:31:25.725969  237558 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:31:25.725965  237558 notify.go:220] Checking for updates...
	I1210 00:31:25.728911  237558 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:31:25.730363  237558 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-8617/kubeconfig
	I1210 00:31:25.731819  237558 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-8617/.minikube
	I1210 00:31:25.733385  237558 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:31:25.734959  237558 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:31:25.737178  237558 config.go:182] Loaded profile config "force-systemd-env-363639": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:31:25.737335  237558 config.go:182] Loaded profile config "kubernetes-upgrade-077211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1210 00:31:25.737515  237558 config.go:182] Loaded profile config "pause-005494": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:31:25.737653  237558 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:31:25.762708  237558 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1210 00:31:25.762855  237558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1210 00:31:25.815123  237558 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:83 SystemTime:2024-12-10 00:31:25.803326272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1210 00:31:25.815232  237558 docker.go:318] overlay module found
	I1210 00:31:25.817386  237558 out.go:177] * Using the docker driver based on user configuration
	I1210 00:31:25.818774  237558 start.go:297] selected driver: docker
	I1210 00:31:25.818787  237558 start.go:901] validating driver "docker" against <nil>
	I1210 00:31:25.818798  237558 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:31:25.820839  237558 out.go:201] 
	W1210 00:31:25.822200  237558 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1210 00:31:25.823577  237558 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-128397 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-128397

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-128397

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-128397

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-128397

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-128397

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-128397

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-128397

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-128397

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-128397

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-128397

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-128397

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-128397" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-128397" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20062-8617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 10 Dec 2024 00:31:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-005494
contexts:
- context:
    cluster: pause-005494
    extensions:
    - extension:
        last-update: Tue, 10 Dec 2024 00:31:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-005494
  name: pause-005494
current-context: pause-005494
kind: Config
preferences: {}
users:
- name: pause-005494
  user:
    client-certificate: /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/pause-005494/client.crt
    client-key: /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/pause-005494/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-128397

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-128397"

                                                
                                                
----------------------- debugLogs end: false-128397 [took: 3.50645169s] --------------------------------
helpers_test.go:175: Cleaning up "false-128397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-128397
--- PASS: TestNetworkPlugins/group/false (3.85s)

                                                
                                    
x
+
TestPause/serial/Pause (0.78s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-005494 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.78s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.41s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-005494 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-005494 --output=json --layout=cluster: exit status 2 (411.567729ms)

                                                
                                                
-- stdout --
	{"Name":"pause-005494","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-005494","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
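Across this report the cluster layout reuses HTTP-like status codes: 200 OK, 405 Stopped, 418 Paused, 500 Error, and 507 InsufficientStorage. A small Go lookup table, listing only the codes actually observed in this run:

	package main

	import "fmt"

	// statusNames maps the StatusCode values observed in this report to their
	// StatusName strings; codes not seen in this run are intentionally omitted.
	var statusNames = map[int]string{
		200: "OK",
		405: "Stopped",
		418: "Paused",
		500: "Error",
		507: "InsufficientStorage",
	}

	func main() {
		fmt.Println(statusNames[418]) // the paused apiserver above reports 418
	}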

TestPause/serial/Unpause (0.86s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-005494 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.86s)

TestPause/serial/PauseAgain (0.84s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-005494 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

TestPause/serial/DeletePaused (5.11s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-005494 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-005494 --alsologtostderr -v=5: (5.10885716s)
--- PASS: TestPause/serial/DeletePaused (5.11s)

TestPause/serial/VerifyDeletedResources (21.17s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (21.115211511s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-005494
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-005494: exit status 1 (17.10248ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-005494: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (21.17s)
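The cleanup verification above treats a failing `docker volume inspect` as proof that the profile's volume is gone; a minimal Go sketch of the same probe (the volume name comes from the log, the helper name is made up):

package main

import (
	"fmt"
	"os/exec"
)

// volumeGone reports whether `docker volume inspect` exits non-zero,
// which is exactly how the test above concludes the volume was removed.
func volumeGone(name string) bool {
	return exec.Command("docker", "volume", "inspect", name).Run() != nil
}

func main() {
	fmt.Println("volume removed:", volumeGone("pause-005494"))
}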

TestStartStop/group/old-k8s-version/serial/FirstStart (135.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-107558 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-107558 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m15.411681537s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (135.41s)

TestStartStop/group/no-preload/serial/FirstStart (54.74s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-775853 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-775853 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (54.735881068s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.74s)

TestStartStop/group/no-preload/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-775853 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cc6a133c-064e-4158-a686-5eebc0ea1d75] Pending
helpers_test.go:344: "busybox" [cc6a133c-064e-4158-a686-5eebc0ea1d75] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cc6a133c-064e-4158-a686-5eebc0ea1d75] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004053349s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-775853 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.37s)
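The DeployApp step above is a create / wait-until-Ready / exec sequence; a rough Go equivalent that shells out to kubectl (the context name comes from the log, and `kubectl wait` stands in for the label-selector polling the test helper actually performs):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl subcommand against the named context.
func kubectl(ctx string, args ...string) (string, error) {
	full := append([]string{"--context", ctx}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	ctx := "no-preload-775853"
	// The same three steps the log records: create the pod, block until
	// it is Ready, then run a command inside it.
	for _, args := range [][]string{
		{"create", "-f", "testdata/busybox.yaml"},
		{"wait", "--for=condition=Ready", "pod/busybox", "--timeout=8m0s"},
		{"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"},
	} {
		out, err := kubectl(ctx, args...)
		fmt.Printf("kubectl %s -> %q err=%v\n", args[0], out, err)
	}
}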

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-775853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-775853 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/no-preload/serial/Stop (11.83s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-775853 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-775853 --alsologtostderr -v=3: (11.833733762s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.83s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-775853 -n no-preload-775853
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-775853 -n no-preload-775853: exit status 7 (74.770811ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-775853 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (262.93s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-775853 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-775853 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m22.623236733s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-775853 -n no-preload-775853
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.93s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-107558 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a02ce27d-5058-4ddb-a15b-75d6b2ffd960] Pending
helpers_test.go:344: "busybox" [a02ce27d-5058-4ddb-a15b-75d6b2ffd960] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a02ce27d-5058-4ddb-a15b-75d6b2ffd960] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003445084s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-107558 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-107558 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-107558 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/old-k8s-version/serial/Stop (11.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-107558 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-107558 --alsologtostderr -v=3: (11.869338142s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.87s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-107558 -n old-k8s-version-107558
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-107558 -n old-k8s-version-107558: exit status 7 (79.258521ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-107558 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (132.00s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-107558 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-107558 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m11.690659164s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-107558 -n old-k8s-version-107558
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (132.00s)

TestStartStop/group/embed-certs/serial/FirstStart (45.09s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-412639 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1210 00:35:36.281611   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-412639 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (45.091234677s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.09s)

TestStartStop/group/embed-certs/serial/DeployApp (7.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-412639 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [66c177a6-46ca-419a-b44b-3aea1829413d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [66c177a6-46ca-419a-b44b-3aea1829413d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.00443523s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-412639 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.31s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-412639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-412639 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/embed-certs/serial/Stop (11.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-412639 --alsologtostderr -v=3
E1210 00:36:15.551758   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-412639 --alsologtostderr -v=3: (11.938054396s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.94s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412639 -n embed-certs-412639
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412639 -n embed-certs-412639: exit status 7 (74.675236ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-412639 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (262.21s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-412639 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-412639 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m21.877869981s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-412639 -n embed-certs-412639
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.21s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.90s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-902978 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-902978 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (43.896052987s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.90s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-474jq" [7ce41eb6-b32e-473c-9d1c-b1810023f39d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003859231s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-474jq" [7ce41eb6-b32e-473c-9d1c-b1810023f39d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003356613s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-107558 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-107558 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-107558 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-107558 -n old-k8s-version-107558
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-107558 -n old-k8s-version-107558: exit status 2 (288.183305ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-107558 -n old-k8s-version-107558
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-107558 -n old-k8s-version-107558: exit status 2 (290.117974ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-107558 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-107558 -n old-k8s-version-107558
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-107558 -n old-k8s-version-107558
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.56s)
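The pause assertions above hinge on `minikube status` exit codes: in this log, exit 2 accompanies a paused or stopped component while the host is up, exit 7 a stopped host, and the assertions accept both as "may be ok". A sketch of that interpretation (binary path and profile name are taken from the log; the code-to-meaning mapping reflects only what this log shows):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// statusExit runs one `minikube status` probe and returns its output
// and exit code, the two things the assertions above look at.
func statusExit(profile, format string) (string, int) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format", format, "-p", profile, "-n", profile)
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return string(out), ee.ExitCode()
	}
	return string(out), 0
}

func main() {
	out, code := statusExit("old-k8s-version-107558", "{{.APIServer}}")
	switch code {
	case 0:
		fmt.Printf("running: %s", out)
	case 2, 7: // paused/stopped component or stopped host -- "may be ok"
		fmt.Printf("not fully up (exit %d): %s", code, out)
	default:
		fmt.Printf("unexpected exit code %d\n", code)
	}
}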

TestStartStop/group/newest-cni/serial/FirstStart (29.17s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-736528 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-736528 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (29.167556811s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.17s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-902978 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [72d08b5b-e80b-482b-8e1a-fb9d0a9d6338] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [72d08b5b-e80b-482b-8e1a-fb9d0a9d6338] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.00393921s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-902978 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.25s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-902978 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-902978 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-902978 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-902978 --alsologtostderr -v=3: (11.879688047s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.88s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-902978 -n default-k8s-diff-port-902978
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-902978 -n default-k8s-diff-port-902978: exit status 7 (73.059614ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-902978 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (276.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-902978 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-902978 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m36.370549061s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-902978 -n default-k8s-diff-port-902978
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (276.69s)

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-736528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/newest-cni/serial/Stop (1.20s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-736528 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-736528 --alsologtostderr -v=3: (1.196184501s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.20s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-736528 -n newest-cni-736528
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-736528 -n newest-cni-736528: exit status 7 (65.257014ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-736528 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (13.66s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-736528 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-736528 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (13.311841259s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-736528 -n newest-cni-736528
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.66s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-736528 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (3.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-736528 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-736528 --alsologtostderr -v=1: (1.182569095s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-736528 -n newest-cni-736528
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-736528 -n newest-cni-736528: exit status 2 (349.681721ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-736528 -n newest-cni-736528
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-736528 -n newest-cni-736528: exit status 2 (338.263506ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-736528 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-736528 -n newest-cni-736528
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-736528 -n newest-cni-736528
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.24s)

TestNetworkPlugins/group/auto/Start (42.36s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-128397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-128397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.355508624s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.36s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hwvnr" [4c83bda7-b9c4-4646-802f-7c2483bde284] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003365551s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hwvnr" [4c83bda7-b9c4-4646-802f-7c2483bde284] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003762075s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-775853 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-775853 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.82s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-775853 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-775853 -n no-preload-775853
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-775853 -n no-preload-775853: exit status 2 (288.282888ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-775853 -n no-preload-775853
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-775853 -n no-preload-775853: exit status 2 (285.102772ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-775853 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-775853 -n no-preload-775853
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-775853 -n no-preload-775853
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.82s)

TestNetworkPlugins/group/kindnet/Start (40.65s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-128397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-128397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (40.649068534s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (40.65s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-128397 "pgrep -a kubelet"
I1210 00:38:57.670258   15396 config.go:182] Loaded profile config "auto-128397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-128397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7g2rk" [8a2f2a82-9da9-4ccc-8173-dfb804275051] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7g2rk" [8a2f2a82-9da9-4ccc-8173-dfb804275051] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004269413s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.21s)

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-128397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)
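The DNS check above boils down to resolving `kubernetes.default` from inside a pod; a minimal in-cluster Go equivalent (it depends on the pod's resolver search domains, so it is only meaningful when run in-cluster):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Equivalent of `nslookup kubernetes.default` from the netcat pod:
	// the name resolves via the cluster DNS search path.
	addrs, err := net.LookupHost("kubernetes.default")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("kubernetes.default ->", addrs)
}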

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-128397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.10s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-128397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
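The Localhost and HairPin probes above both reduce to a timed TCP connect (`nc -w 5 -i 5 -z <host> 8080`); a minimal Go equivalent meant to run inside the netcat pod (the target names come from the manifests in the log, everything else is illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

// reachable mimics `nc -w 5 -z host port`: success iff a TCP
// connection opens within the timeout.
func reachable(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	// "localhost:8080" mirrors the Localhost test; "netcat:8080" is the
	// hairpin case, the pod dialing itself through its own service name.
	for _, addr := range []string{"localhost:8080", "netcat:8080"} {
		fmt.Println(addr, "->", reachable(addr))
	}
}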

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-57tjx" [f2856546-c6ca-43c1-9b0e-0a486917e865] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004338176s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-128397 "pgrep -a kubelet"
I1210 00:39:25.123993   15396 config.go:182] Loaded profile config "kindnet-128397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-128397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mblbh" [d00ec71d-12c5-428f-a5d3-4d9c08ec6c9e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mblbh" [d00ec71d-12c5-428f-a5d3-4d9c08ec6c9e] Running
E1210 00:39:33.850334   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/old-k8s-version-107558/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:39:33.856774   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/old-k8s-version-107558/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:39:33.868193   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/old-k8s-version-107558/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:39:33.889658   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/old-k8s-version-107558/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:39:33.931124   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/old-k8s-version-107558/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:39:34.012541   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/old-k8s-version-107558/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:39:34.174363   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/old-k8s-version-107558/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:39:34.496492   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/old-k8s-version-107558/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:39:35.138505   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/old-k8s-version-107558/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003426131s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

TestNetworkPlugins/group/calico/Start (59.16s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-128397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-128397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (59.156084806s)
--- PASS: TestNetworkPlugins/group/calico/Start (59.16s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-128397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-128397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-128397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/Start (48.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-128397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1210 00:40:14.827159   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/old-k8s-version-107558/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-128397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (48.190861774s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (48.19s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gw8dq" [bf570a14-35e3-407b-910b-b6615e689b30] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003850737s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
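Note: the harness polls for pods matching the label until they report healthy. A rough kubectl equivalent, a sketch rather than the harness's own mechanism (the 600s timeout mirrors the 10m0s wait above):

    kubectl --context calico-128397 -n kube-system wait pod \
      -l k8s-app=calico-node --for=condition=Ready --timeout=600s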

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-128397 "pgrep -a kubelet"
I1210 00:40:30.638409   15396 config.go:182] Loaded profile config "calico-128397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)
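Note: pgrep -a prints each matching PID together with its full command line, which lets the test assert the kubelet's driver- and runtime-specific flags. With a release binary:

    minikube ssh -p calico-128397 "pgrep -a kubelet"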

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-128397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f6hnm" [f12adf2e-3fd7-4631-8c13-371408c3dc61] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f6hnm" [f12adf2e-3fd7-4631-8c13-371408c3dc61] Running
E1210 00:40:36.281435   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/functional-113090/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004465735s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.17s)
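Note: kubectl replace --force deletes and recreates the objects, so reruns always begin from a fresh netcat deployment; the harness then waits for app=netcat to reach Running, as above. Sketch:

    kubectl --context calico-128397 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context calico-128397 get pods -l app=netcat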

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-128397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-128397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-128397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-128397 "pgrep -a kubelet"
I1210 00:40:43.349259   15396 config.go:182] Loaded profile config "custom-flannel-128397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-128397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7nn7l" [51905b6c-7cfa-4dc6-97f6-a1bb26f486b0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7nn7l" [51905b6c-7cfa-4dc6-97f6-a1bb26f486b0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004370439s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ntzj8" [428338e6-eea5-418c-a7c3-c3e04dfdb48c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003766327s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-128397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-128397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-128397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ntzj8" [428338e6-eea5-418c-a7c3-c3e04dfdb48c] Running
E1210 00:40:55.788589   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/old-k8s-version-107558/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004492881s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-412639 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (74.05s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-128397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-128397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m14.05167864s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (74.05s)
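Note: --enable-default-cni=true is the older spelling for the basic bridge CNI; minikube's help text describes it as superseded by --cni=bridge, so treat the equivalence as approximate. Sketch with a release binary:

    minikube start -p enable-default-cni-128397 --memory=3072 \
      --enable-default-cni=true --driver=docker --container-runtime=crio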

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-412639 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)
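Note: image list --format=json emits a JSON array describing each image, which the test scans for non-minikube entries. A sketch for pulling out tags; the .repoTags field name is an assumption about the JSON shape, so verify it against your minikube version:

    minikube -p embed-certs-412639 image list --format=json
    minikube -p embed-certs-412639 image list --format=json | jq -r '.[].repoTags[]'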

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.34s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-412639 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-412639 -n embed-certs-412639
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-412639 -n embed-certs-412639: exit status 2 (331.227918ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-412639 -n embed-certs-412639
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-412639 -n embed-certs-412639: exit status 2 (356.394162ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-412639 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-412639 --alsologtostderr -v=1: (1.079345273s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-412639 -n embed-certs-412639
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-412639 -n embed-certs-412639
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.34s)
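Note: while a profile is paused, minikube status exits with status 2 and reports the APIServer as Paused and the Kubelet as Stopped, which is why the test accepts exit status 2 above. Round-trip sketch with a release binary:

    minikube pause -p embed-certs-412639
    minikube status -p embed-certs-412639 --format='{{.APIServer}}'   # Paused, exit 2
    minikube status -p embed-certs-412639 --format='{{.Kubelet}}'     # Stopped, exit 2
    minikube unpause -p embed-certs-412639
    minikube status -p embed-certs-412639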

                                                
                                    
TestNetworkPlugins/group/flannel/Start (51.14s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-128397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-128397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (51.135250323s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (34.25s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-128397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1210 00:41:15.551924   15396 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/addons-701527/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-128397 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (34.249092045s)
--- PASS: TestNetworkPlugins/group/bridge/Start (34.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-128397 "pgrep -a kubelet"
I1210 00:41:48.855649   15396 config.go:182] Loaded profile config "bridge-128397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-128397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vw92h" [ff7947ed-d570-4f17-bcef-4cf7410651b2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vw92h" [ff7947ed-d570-4f17-bcef-4cf7410651b2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004286612s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-4lkjw" [6fab7d15-c5cf-466f-bb74-7160400290cb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004932181s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-128397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-128397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-128397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-128397 "pgrep -a kubelet"
I1210 00:42:04.756738   15396 config.go:182] Loaded profile config "flannel-128397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-128397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jn22l" [1234002a-9346-48ba-a625-1a11a1e613d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jn22l" [1234002a-9346-48ba-a625-1a11a1e613d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004239222s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-128397 "pgrep -a kubelet"
I1210 00:42:13.683016   15396 config.go:182] Loaded profile config "enable-default-cni-128397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-128397 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hn6f2" [212e742e-21b9-4c8c-8659-91cc08d6f32d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hn6f2" [212e742e-21b9-4c8c-8659-91cc08d6f32d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004767063s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-128397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-128397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-128397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-128397 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-128397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-128397 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gh79x" [f9d0c0dc-5b27-4ab2-a69d-d53933454363] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005485298s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gh79x" [f9d0c0dc-5b27-4ab2-a69d-d53933454363] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00440296s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-902978 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-902978 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-902978 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-902978 -n default-k8s-diff-port-902978
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-902978 -n default-k8s-diff-port-902978: exit status 2 (283.255544ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-902978 -n default-k8s-diff-port-902978
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-902978 -n default-k8s-diff-port-902978: exit status 2 (295.081022ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-902978 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-902978 -n default-k8s-diff-port-902978
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-902978 -n default-k8s-diff-port-902978
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.73s)

                                                
                                    

Test skip (26/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-701527 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.26s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-962621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-962621
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.97s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-128397 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-128397

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-128397

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-128397

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-128397

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-128397

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-128397

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-128397

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-128397

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-128397

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-128397

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-128397

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-128397" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-128397" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-128397" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20062-8617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 10 Dec 2024 00:30:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-005494
contexts:
- context:
    cluster: pause-005494
    extensions:
    - extension:
        last-update: Tue, 10 Dec 2024 00:30:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-005494
  name: pause-005494
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-005494
  user:
    client-certificate: /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/pause-005494/client.crt
    client-key: /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/pause-005494/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-128397

>>> host: docker daemon status:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

>>> host: docker daemon config:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

>>> host: docker system info:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

>>> host: cri-docker daemon status:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

>>> host: cri-docker daemon config:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

>>> host: cri-dockerd version:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

>>> host: containerd daemon status:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

>>> host: containerd daemon config:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

>>> host: containerd config dump:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

>>> host: crio daemon status:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

>>> host: crio daemon config:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

>>> host: /etc/crio:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

>>> host: crio config:
* Profile "kubenet-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-128397"

----------------------- debugLogs end: kubenet-128397 [took: 2.804878195s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-128397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-128397
--- SKIP: TestNetworkPlugins/group/kubenet (2.97s)

TestNetworkPlugins/group/cilium (4.68s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-128397 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-128397
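
The netcat entries below are in-cluster DNS probes (against the kube-dns service IP 10.96.0.10) that the debug-log collector runs through kubectl; because the cilium-128397 context was never created, each one fails at the kubectl layer before any DNS traffic is sent. Roughly what is being attempted, as a sketch (the deploy/netcat target and exact flags are assumptions, not the collector's literal invocation):

	# resolve the API service name via cluster DNS
	kubectl --context cilium-128397 exec deploy/netcat -- nslookup kubernetes.default
	# query kube-dns directly over UDP and then TCP port 53
	kubectl --context cilium-128397 exec deploy/netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local
	kubectl --context cilium-128397 exec deploy/netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local +tcp
	# raw port-reachability checks, UDP and TCP
	kubectl --context cilium-128397 exec deploy/netcat -- nc -z -u -w 3 10.96.0.10 53
	kubectl --context cilium-128397 exec deploy/netcat -- nc -z -w 3 10.96.0.10 53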

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-128397

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-128397

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-128397

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-128397

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-128397

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-128397

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-128397

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-128397

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-128397

>>> host: /etc/nsswitch.conf:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: /etc/hosts:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: /etc/resolv.conf:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-128397

>>> host: crictl pods:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: crictl containers:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> k8s: describe netcat deployment:
error: context "cilium-128397" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-128397" does not exist

>>> k8s: netcat logs:
error: context "cilium-128397" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-128397" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-128397" does not exist

>>> k8s: coredns logs:
error: context "cilium-128397" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-128397" does not exist

>>> k8s: api server logs:
error: context "cilium-128397" does not exist

>>> host: /etc/cni:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: ip a s:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: ip r s:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: iptables-save:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: iptables table nat:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-128397

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-128397

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-128397" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-128397" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-128397

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-128397

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-128397" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-128397" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-128397" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-128397" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-128397" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: kubelet daemon config:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> k8s: kubelet logs:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20062-8617/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 10 Dec 2024 00:31:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-005494
contexts:
- context:
    cluster: pause-005494
    extensions:
    - extension:
        last-update: Tue, 10 Dec 2024 00:31:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-005494
  name: pause-005494
current-context: pause-005494
kind: Config
preferences: {}
users:
- name: pause-005494
  user:
    client-certificate: /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/pause-005494/client.crt
    client-key: /home/jenkins/minikube-integration/20062-8617/.minikube/profiles/pause-005494/client.key
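
Note: unlike the kubenet dump above, this kubeconfig does have current-context set (to pause-005494); the cilium-128397 failures are still expected because that context simply is not defined in the file. A quick check, as a sketch against this kubeconfig:

	# prints pause-005494 for this kubeconfig
	kubectl config current-context
	# still fails: the requested context is not in the file
	kubectl --context cilium-128397 get nodes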

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-128397

>>> host: docker daemon status:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: docker daemon config:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: docker system info:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: cri-docker daemon status:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: cri-docker daemon config:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: cri-dockerd version:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: containerd daemon status:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: containerd daemon config:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: containerd config dump:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: crio daemon status:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: crio daemon config:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: /etc/crio:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

>>> host: crio config:
* Profile "cilium-128397" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128397"

----------------------- debugLogs end: cilium-128397 [took: 4.470645783s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-128397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-128397
--- SKIP: TestNetworkPlugins/group/cilium (4.68s)